I see these challenges as a great way for excellent experienced developers to weed out incompetent companies.
I'm a kick-ass get-things-done full-stack web engineer. I've never had to deal with one of these sorts of problems in my day-to-day work, and if I did, I'd just find an existing, tested, stable library that already handled them.
A company that needs someone to solve these sorts of problems doesn't want me on their team in the first place, nor would I thrive there. A company that just needs to build damn good web apps is losing out by using these sorts of questions in their interviews.
The best interview challenge I've had (actually, it was a take-home, with discussion in the interview proper) was about designing code for re-use and extension. It was a great indicator of the company's practical and mature approach to engineering, and of what they really wanted this hire to accomplish.
> I'm a kick-ass get-things-done full-stack web engineer.
And modest, too. If an engineer gave me your answer ("I never learned the principle because I never had to") I would know they aren't a fit for my team.
No, it's the attitude. Saying "I don't know depth first search" is fine, saying "I'll never need this and by asking it you've revealed what a terrible company you are" is sour grapes.
Not revealed as a terrible company, perhaps, but as a terrible interviewer.
If any company were to quiz me on algorithmic basics, it had better explain to me beforehand why it is among the x% of all hiring companies that actually need to roll their own new solutions in the face of so many well-established libraries.
That is, before you ask me to demonstrate a depth-first search, you had better explain to me why I'm going to need to be doing that instead of just writing an SQL query and tweaking an index, which is likely what I would be doing at most companies.
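For what it's worth, the whiteboard staple in question is small. A minimal sketch of an iterative depth-first search in Python (the graph and names here are made up for illustration):

```python
# Iterative depth-first search over an adjacency-list graph.
def dfs(graph, start):
    seen, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        # Push neighbors in reverse so they pop in listed order.
        stack.extend(reversed(graph.get(node, [])))
    return order

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(dfs(graph, "a"))  # ['a', 'b', 'd', 'c']
```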
Part of development is figuring out not just the answers to the questions, but also figuring out "Of all the questions I could have been asked, why was I asked that question?"
A disappointingly large fraction of the time, the answer to "why did you ask that question?" is "we noticed a correlation, confused it for causation, and built an entire strategy around it".
> need to roll their own new solutions in the face of so many well-established libraries
I'm not defending interview quiz-time, but it isn't (or shouldn't be) about rolling your own solution. It's about understanding concepts. Understanding the basics of time/space complexity is pretty fundamental when designing systems.
Developers frequently encounter hashing and (probably less-often) trees, so I don't see an issue with asking something like "Why/how/when is a hash-based lookup faster <in some situation> than a tree-based lookup?" (as one question during an interview). If a person can give a good answer to this, they'll pretty easily be able to understand the performance tradeoffs of hash vs btree indexes when using mysql (and this same tradeoff in a variety of other situations).
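That question has a compact demonstration. A sketch in Python, with a sorted list plus binary search standing in for the tree (a simplification for illustration, not how MySQL implements either index type):

```python
import bisect

# Hash lookups are O(1) on average but unordered; an ordered structure
# costs O(log n) per lookup but also answers range queries. That's the
# hash-vs-btree index tradeoff in miniature.
n = 100_000
hashed = {k: k for k in range(n)}    # hash-based lookup: dict
sorted_keys = list(range(n))         # ordered lookup: sorted list + bisect

def hash_lookup(k):
    return hashed[k]                 # average O(1)

def tree_lookup(k):
    i = bisect.bisect_left(sorted_keys, k)   # O(log n)
    return sorted_keys[i]

# Point lookups agree:
assert hash_lookup(54_321) == tree_lookup(54_321) == 54_321

# But only the ordered structure answers "all keys between 10 and 20":
lo = bisect.bisect_left(sorted_keys, 10)
hi = bisect.bisect_right(sorted_keys, 20)
assert sorted_keys[lo:hi] == list(range(10, 21))
```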
If you're measuring my design ability in the interview, you had better give me an opportunity to use that ability after being hired.
Too many times have I been asked questions that shaped my expectations of the job, only to be disappointed later on. The worst offender in this respect put me through a technical screen that could only reasonably be passed by someone with a bachelor's degree in CompSci or equivalent work experience, only to later tell me that the code I write "should be understandable by a kid fresh out of high school" (actual quote). I applied to 12 different job postings that night.
The interview led me to believe that I was being hired for my expertise, and the job expectation was actually to be a warm, brainless body in a formerly empty seat. That's why I don't like questions that act as proxies for some other metric. The questions you ask me are telling me about you as much as my answers tell you about me. When you cargo-cult interview procedures from another company, you are actually misrepresenting the nature of your own company as being like the company you stole your interviews from.
Not quite. I was probably actually replaced by someone fresh out of college, who failed to learn anything during that 5 years. I am 99% certain the contract requirement mandated bachelor's degrees (and in a relevant field) for all software developers.
> "before you ask me to demonstrate a depth-first search, you had better explain to me why I'm going to need to be doing that instead of just writing an SQL query and tweaking an index"
How am I going to ask you to tweak an index if you don't know how to traverse a tree? Though I would have asked you about B+ and B* trees, the ones used to index a database. There are differences between those trees, and you need to know them in order to decide which one is the better option for improving query performance. Obviously this improvement means nothing for a small startup, but imagine the impact it has at a company like Google.
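To illustrate the property that makes B+ trees the usual choice for database indexes: the rows live only in leaf nodes, and the leaves are chained together, so a range scan descends the tree once and then just walks the leaf chain. A toy sketch in Python (not a real B+ tree; it has no internal nodes, balancing, or splits):

```python
# Toy "B+ style" leaves: data lives only in leaf nodes, which are chained.
class Leaf:
    def __init__(self, keys):
        self.keys = keys      # sorted keys stored in this leaf
        self.next = None      # link to the next leaf

def build_leaves(keys, fanout=4):
    leaves = [Leaf(keys[i:i + fanout]) for i in range(0, len(keys), fanout)]
    for a, b in zip(leaves, leaves[1:]):
        a.next = b
    return leaves

def range_scan(leaves, lo, hi):
    # A real tree would descend internal nodes to find the starting leaf;
    # here we just scan for it, then walk the leaf chain.
    leaf = next((l for l in leaves if l.keys[-1] >= lo), None)
    out = []
    while leaf:
        out.extend(k for k in leaf.keys if lo <= k <= hi)
        if leaf.keys[-1] > hi:
            break
        leaf = leaf.next
    return out

leaves = build_leaves(list(range(20)))
print(range_scan(leaves, 5, 12))  # [5, 6, 7, 8, 9, 10, 11, 12]
```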
I think the point is that a lot of interviewers ask those questions because big companies do. But there is a reason why they do them. Obviously I would prefer the candidate who knows the answer over another one who also does the job but doesn't.
How big is the chance that you select the candidate who happens to know all the solutions (e.g. by learning them by heart recently for the dozens of interviews he's planning on doing), but is not a good technical fit? My estimate: pretty high. Let me explain why.
If you indeed need someone who knows about B* vs B+ trees, why not ask him about that separately ("explain to me the difference between ..."), to see if he has the technical background you need? Even if someone understands that difference, that same person might have problems getting DFS right during a live interview, for a number of completely irrelevant reasons (nervousness, a momentary lapse, being a bit "rusty", and so forth). For the candidate to know the difference between a B+ and B* index and to operate a database correctly doesn't require the ability to implement DFS flawlessly (and most likely will never require the person to take a shot at that, for that matter...)
I think the real problem is that some people don't seem to understand the goal of these questions. If you do algo whiteboard questions (and you should!), you should measure the candidate's behavior, reactions, and analytical skills while they attempt a solution with you, not whether the provided solution is correct (indeed, it's often more interesting when it isn't and you work with the candidate on locating the issue!). And all the while, the interviewer can try to figure out whether he/she wants to work with that person, given the current interaction.
I disagree with you on a lot of things. I don't like whiteboards because they are stressful and feel strange if you don't use them regularly. I was a teacher who used whiteboards daily, and they are completely different from using a computer. I would rather ask the candidate to talk about one of his projects, start asking relevant questions related to the job position and applied to the project he knows, and have a discussion that way. Interaction with the candidate only biases your decision towards his personality rather than his skills, towards whether he does things the way I would do them.
But that is my opinion; the person who is hiring is the one who decides whom he wants to hire. I feel more confident finding a good fit/candidate my way than the way you described.
"Perhaps you won't, but since we don't know yet exactly what you'll be working on (we might know broadly what PA or even project, but we can't know what problems you will encounter or what direction it will take to debug any problems that arise), we want someone with a broad base of skills who can at the very least recognize performance problems and solve them in a simple case. We expect that if you can solve this relatively simple problem in an environment with no resources, then with the aid of documentation, profiling tools, and teammates to lean on, you'll be able to address much more complex issues that arise. On the other hand, if it takes documentation and teammates to solve this simple case, who knows what kinds of tools it will take you to solve the real-world problems that come up."
"Correlation does not imply causation" doesn't imply that correlation never implies causation.
> If any company were to quiz me on algorithmic basics, it had better explain to me beforehand why it is among the x% of all hiring companies that actually need to roll their own new solutions in the face of so many well-established libraries.
Please suggest a reasonable alternative then. I need to interview people and see, as reasonably well as I can in an hour, whether they are going to be capable of digging through complicated code, of coding things reasonably quickly, of writing scalable, robust code, of potentially digging into things enough to optimize their performance, etc. And I need enough concrete evidence that everyone else in the debrief believes me. I would love to change the way I do things, but I am held accountable for the interview, so I can't just show up to the debriefs saying crap like, "He says he can use a library for anything that comes up." And I need to evaluate some other soft-skills stuff too, like caring about the customer, communicating well, and delivering more if you see something extra that you think should be done.
But the obvious answer to your question would be, "because if everything we do could be solved just by using a library, we'd have interns or offshore people do it for a fraction of the pay."
As prirun says, you have a conversation, not a quiz.
Quizzes are a terrible way to assess skill; they're used in academics because they're, sadly, the only quick way to get some sort of consistent feeling over the memory and information retention in a group of dozens of students. No intelligent company should be copying this; companies have the luxury of dedicating significant time to exploring the potential of each candidate.
You say "Hey, here's a real problem that we'd need to solve. How would you go about it?" And let them talk. They'll be able to speak in informed terms about how to approach the problem even if their algorithms memory is rusty or if they're self-taught. If it sounds like they're on the right track, you win, if not, you move on. Repeat until end of interview, evaluate at the end.
I've used a basic code competency test for an interview pre-screen before. This is a take-home project that should take a competent person about 30 minutes, 2 hours if they go all out and add a bunch of bells and whistles.
People say this, that their take-homes are non-intrusive, but don't mean it. I mean it. The quiz is seeing if they can make a single API call to a prominent API and return the results according to a very simple format.
These pre-screens should be well below the competency requirement, which can only be assessed in the interview. It's essentially a fizzbuzz that they can't easily copy and paste by searching for "solutions to fizzbuzz".
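For anyone who hasn't seen it, FizzBuzz is about this hard, which is the point: a screen this trivial only filters out candidates who cannot code at all.

```python
# The classic FizzBuzz: multiples of 3 -> "Fizz", of 5 -> "Buzz",
# of both -> "FizzBuzz", everything else -> the number itself.
def fizzbuzz(n):
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15)[-1])  # FizzBuzz
```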
Other than an extremely simple pre-screen like that, your decision should be based on their ability to reason, discuss, and solve problems on the fly, not how well they remember the particulars of compsci curriculum. That selects for the recency of their last class, which, ironically, is usually inversely correlated with their much more valuable real-world experience and what people think they're trying to test for.
This doesn't apply if you actually are doing deeply theoretical work on the cutting edge of compsci that may require frequent cooperation with academics. But that's a tiny minority of positions.
I love this; it's a good insight into what a hiring company wants to know about a potential employee. So how do you get there? Here's my suggestion:
There are 2 very distinct (in my mind) kinds of skills in question. For the "soft skills", a manager type could, and maybe should, do that part of the interview. A trusted tech person should do the technical part. I'd go so far as to say that every technical person on staff should be trained/groomed to help with interviews. So maybe you have 1 tech person do the tech interview, and a 2nd is learning how to interview.
For the interview itself, the process I'd use is to ask a lot of questions. For the soft interview, things like:
- "Tell me about a difficult customer you've had." Let them explain a while, and ask lots of probing questions: "Why didn't you do X instead?", etc.
- "Tell me about a difficult problem you had with a former employer." Same drill.
For the tech part, bring some code to the interview, ask the interviewee to bring some code to discuss. Another option would be to just throw them into a source code tree and say something like "Here's a tree of one of our projects. Talk to me about it." Maybe I'm wrong, but I think I could tell a lot about a tech person just from how they approached being thrown into a mess with no prior information. After all, that's a large part of being a good developer IMO. As they start to figure things out, look at code, etc., ask them questions about what they're seeing. Let them ask too. If you run into something interesting, like a specialized algorithm, ask them about it: "Why do you think it was done this way? What would some other options be?"
Then let them do these things for code they have previously written. For example, I wrote a Prime minicomputer emulator. I'd love an interview where they asked me about why I did this project, what problems I ran into, what were some alternative designs for tricky problems, what were the tricky problems, what did I learn from doing this, what tools did I use to do it, etc.
>Another option would be to just throw them into a source code tree and say something like "Here's a tree of one of our projects. Talk to me about it."
"It's in Perl, I don't know Perl, and especially not your internally modified version of custom-magic Perl that Steve wrote 7 years ago."
Not to mention that you are now either showing source code trees to random potential hires, or you have to audit/create/otherwise curate some set of source code to use. Maybe you prescreen by asking them their favorite language, and you come in with an open source project in their language of choice, but now one of your devs has to spend time familiarizing themselves with Redis or the Python interpreter or Hibernate Core or Angular or whatever. And what happens when they ask to do the interview in Haskell?
FWIW, I know some companies that do the interviews you're describing, but they're all relatively small (<100 employees), and they all do that kind of interview only after a technical phone screen with your conventional questions, because the time investment required by the company is so great.
I've worked on projects where people roll their own solutions instead of using stuff that's already out there, and it's not great either.
The real solution I guess is talking to people and teasing this info out of them, and perhaps showing them some older code you have since fixed and seeing what they find in it.
Edit - one reason I personally try and use a lot of libraries is so that large parts of a project are maintained upstream after I leave my contract; the more I write into the project, the more the next dev will have to maintain, anything I can push back into external libraries is a win.
In fact, it should be explained in the job posting itself, with the added bonus that people can then self-select in or out of the offer without losing any further time.
No, it most definitely is not "fine". Even momentarily hesitating on question like "Would you use BFS or DFS in this case?" is more than enough to merit a quick transition to the "Do you have any questions for me?" phase, in many a modern, "As hire As, Bs hire Cs, you know" interview session, these days.
As an engineer you should be able to see the big picture and know other things, because you won't be able to use something to solve your problem if you don't know it exists. I mean, you don't need to know the details, but you need to know how things work.
For example, you might not need an AVL tree in your daily job, but if one day you do need it, you won't be able to notice if you don't know what an AVL tree is and what its advantages are over other trees.
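The failure mode that makes "know that self-balancing trees exist" worth something can be shown in a few lines. A sketch in Python: a plain binary search tree fed sorted input degenerates into a linked list, which is exactly what AVL (or red-black) rebalancing prevents.

```python
import math

# A naive, non-balancing binary search tree.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def height(root):
    return 0 if root is None else 1 + max(height(root.left), height(root.right))

root = None
for k in range(500):             # sorted inserts: the pathological case
    root = insert(root, k)

assert height(root) == 500       # every node is a right child: O(n) lookups
assert math.ceil(math.log2(500 + 1)) == 9   # the height a balanced tree keeps
```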
Another example: you don't need to know the differences between HTTP/1 and HTTP/2 for your daily work. But if you know them, you will change how you build web pages and you will see a leap in performance and scalability. And that knowledge also includes some knowledge about TCP, UDP, caching and other stuff.
The amount of things we need to know is not commensurate with the salaries at companies that lazily copy someone else's interview. Most of us are not practitioners who get to dictate a lot of the technical decisions. We're just told we need to know what an AVL tree is before we can work on CRUD app #4272095.
If you work really hard, learn all of this stuff, and do a really good job and save your company a bunch of trouble by knowing all of this, you won't get a dime for it, and that's the problem. You get a pat on the back and you feel slightly less like an imposter for a few days.
People are arguing that we should offer quality for free. This is what open source is for.
I would argue security is more important than performance knowledge, but there is virtually zero focus on this that I've heard from the technical interview rabble-rousing that goes on in these threads.
> If you work really hard, learn all of this stuff, and do a really good job and save your company a bunch of trouble by knowing all of this, you won't get a dime for it, and that's the problem
I think that is your problem. I don't stay at companies where I don't feel valued. And anyway, you are receiving a salary every month for your work. If you think you should get more, ask for more or move somewhere else.
For some people security is more important; for me it is not, because I believe that in today's society companies can fall within a few years (Nokia, Canon, ...). My security lies in my knowledge and my skills, and that is something I take with me wherever I am. I recall something I read that went like this: "A bird is not afraid of the branch breaking, because its confidence lies in its own ability to fly."
>My security lies in my knowledge and my skills, and that is something I take with me wherever I am.
Sorry, I was being confusing; I meant network and information security. Using a good password hash/library, not trusting user input, knowing some basic attacks or stuff from the OWASP Top 10 and how to reproduce them. That stuff is still a problem. I'll admit I probably know less than I should because it's not at the forefront of my mind and I don't practice often, but it just seems weird that a company would prefer performance over secure coding.
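"Use a good password hash" has a stdlib-only illustration in Python: salted PBKDF2 with a constant-time comparison. (In practice you'd likely reach for a maintained library such as bcrypt or argon2, and tune the iteration count to your hardware; the count here is just a plausible placeholder.)

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    # A fresh random salt per password defeats precomputed (rainbow) tables.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # compare_digest avoids leaking where the mismatch occurs via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify("hunter2", salt, digest))   # True
print(verify("wrong", salt, digest))     # False
```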
Oh, I misunderstood, sorry. I think everything is important; the more you know, the better you can do your job. Probably maintenance comes before security, and security before performance. But it depends on the case.
... but if one day you do need it, you won't be able to notice if you don't know what an AVL tree is and what its advantages are over other trees.
No, what would happen is you'd say to yourself "Hmm, looks like I need a self-balancing tree. Haven't thought about those in N years..." and do a few seconds of keyword searching. The idea that people will be utterly helpless on their jobs without instantaneous photographic recall† of AVL trees (and A-star, and all the other crap people are bullied into memorizing these days) just doesn't hold water.
† Which, as we know, is the only level of recall acceptable in the modern interview process. Even momentarily hesitating in your recall of certain definitions will easily get you flushed by some interviewers.
The research you have to do is based on your previous knowledge. The less you know, the more you need to learn when you are facing a problem you don't know how to solve. But people are lazy: if they already know a way to solve it, they will go that way even if it is the worst way possible.
I am not talking about implementing an AVL tree without any help or resources. That is simply stupid; we have the internet, and we should use it and save time. My point is that you need to know that there are balanced trees, and know some of them and their characteristics. That knowledge will save you time when you face problems related to trees. You are not going to know all the trees, because researchers are working on new ones every day, but you should have some knowledge in the area.
If your interviewer drops you because you hesitated on something, maybe it is the interviewer's problem and not yours. We are not perfect; we should accept that people make mistakes and hesitate. I was rejected from interviews where I hesitated when they asked something about linked lists. I was happy it happened, because I was in shock: I was explaining how I did something with an A* algorithm and an octree, and I couldn't believe the next question was about a linked list. I wouldn't be happy in that company.
> No, what would happen is you'd say to yourself "Hmm, looks like I need a self-balancing tree. Haven't thought about those in N years..."
You're attacking a strawman which has little to do with the original post or what the other commenter is saying. There's a lot of developers out there who lack the knowledge to ask the question you pose in the first place, so wouldn't end up doing that searching. The type of interviewing you're criticizing is intended to identify developers lacking the kind of base knowledge needed as a foundation for further inquiry.
Knowing when to use an AVL tree is orthogonal to being able to write one under pressure on a whiteboard. But as long as someone knows that different trees have different performance characteristics, I care more about how they decided what to optimize and why they recommend a lookup-optimized tree over a cache.
No, we should learn the basic algorithms and data structures (and their performance and storage characteristics) because they're the building blocks of everything else we use. You're using some API? OK, cool. Knowing some about its internal workings means that when you've got some wonky performance issue, you've got a basis to start reasoning from, to find and fix the cause of the problem.
Never once has knowing about the performance of linked lists or hashmaps been necessary to fix an API issue. 99% of your API issues will be "this doesn't work for X reason", not "this API response is slow because we weren't using X instead of Y data structure".
You sound like you've worked with significantly different code than I have. Your "never once" is my "almost always", and my "X reason" is often because I'm working with an immature, unreleased library developed by another team at my employer.
What's more likely: that everyone needs to know intimate details about basic data structures or that the world is held together with duct tape and Perl and 95% of code is written by people not in Silicon Valley who need it to just work?
I think you are looking at this wrong. I have zero problems with someone presenting themselves this way and would definitely consider hiring the person.
Here's reality: Someone who knows their stuff is able to dive into details of their work in a way that an impostor can't.
My response to the above would be to have the person bring in some of their work and take an hour or two to take a deep dive into it. I want to see code, documentation, examples of trade-offs and a discussion of the reasoning, challenges, what could be made better, what should not be touched and why, project history, etc.
A conversation with someone who knows what they are doing and is very actively involved in their work is very different from a conversation with someone who might be trying to bullshit you or simply doesn't know enough.
I hate puzzles. All I learn from them as an employer is that someone might have devoted a month to memorizing a whole bunch of them for the interview.
I would imagine that at the scale of a company like Google, resorting to puzzles as a first filter might be an inevitable reality. If you have to interview people en masse you almost have no choice. It's like Stanford having to filter through 40,000 applications a year to accept 2,000 students. You have no choice but to go algorithmic on that problem.
The problem with your approach is that it excludes anyone who spends most of their time writing code for a business. I legally can't provide you with the code that I have written over the last several years. You're limiting your applicant pool to people who have been paid to work on open source, freelance web developers, and people with very little life outside of work.
I agree that puzzles aren't all that great of a measure of ability, but at least anyone can do them without writing about thorny legal issues over IP or spending all of their free time on spare projects.
Not entirely sure how you might have reached that conclusion. Anyone dedicated to their craft will naturally --out of sheer interest-- devote time and effort to getting better at it.
For example, as a young EE I was constantly reading data books (yes, physical data books) and application notes. I had hundreds of data books and probably went through all of them twice and some several times. I could, at the time, talk about almost any chip from any of the major manufacturers and knew where relevant application notes existed for most problems. My employers did not mandate that at all. I was truly interested in what I was doing.
Someone interviewing me at the time would have learned a heck of a lot more about me if they asked me something as simple as "Can you tell me about a few interesting chips and how you would use them?" rather than asking me to design a low pass filter with a given frequency response using a specific op-amp.
The deep dive I am talking about does not require anyone disclosing code done for their existing employer. Nobody wants that.
Frankly, I would also want to know about life outside of work. However, given our laws you have to be very careful about how you might probe for such information. I feel very strongly that our legal system has, to one degree or another, sapped all humanity out of our work life. I've worked in other cultures where it is perfectly normal for people to greet each other with a kiss on the cheek and a hug or pat on the back in the morning. In the US almost any physical contact can land an employer in court and a manager in serious legal trouble. But I digress.
So here's the thing: not everyone is you, not everyone is passionate about technology, and it's not reasonable to expect people to do more of their job outside of work for free. You might like it, and that's great, but people have families and hobbies and interests that sometimes have nothing to do with the 40 hours they churn through for their bosses. I'm so sick of interviewing for companies who want "passionate" developers, which translates to "doing more work for free".
If you're curious about why we have workplace harassment laws, talk to basically any woman with a job and then rethink your sweet nostalgia.
> not everyone is you, not everyone is passionate about technology, and it's not reasonable to expect people to do more of their job outside of work for free
I am not sure how you twist personal development and remaining up to date into "do more of their job outside of work for free".
Let's get away from technology for a moment.
You need to go to the doctor.
Would you rather go to a doctor who is passionate about their work and constantly devoting personal time to remain up to date on the latest research, drugs, studies, techniques and technology?
Or would you rather go to a doctor who has not devoted a single day in ten years to stay up to date?
I'd pick option #1 every time. Same with engineers.
Look, if you are not interested in remaining current, that's OK. Just don't expect to have access to the same opportunities. It's as simple as that.
> talk to basically any woman with a job and then rethink your sweet nostalgia
Wow. Not sure how you made the jump to harassment there. Gender has nothing to do with it.
> I am not sure how you twist personal development and remaining up to date into "do more of their job outside of work for free".
Well, unless you're doing personal development at work during the 40 hours (or so) that you're required to be there, then you're probably not getting paid for it. I mean, I'm willing to tinker on my own, but you can be a good developer without doing it (you can also be a bad developer in spite of doing that).
> Would you rather go to a doctor who is passionate about their work and constantly devoting personal time to remain up to date on the latest research, drugs, studies, techniques and technology?
I want to go to a reasonably friendly, competent doctor. I really don't care how passionate they are about being a doctor, because I don't view passion as a proxy for quality (because it isn't).
> I'd pick option #1 every time. Same with engineers.
So my friend is a brilliant chemical engineer, and yet she doesn't sit around designing plants at home on the weekends. No, she hangs out with friends, sings in a choir, watches TV shows, travels, etc. I don't particularly understand the tech industry's fetish with eating, sleeping, and breathing code, and it's a damn shame that we're pushing away really fantastic nine-to-five developers.
> Look, if you are not interested in remaining current, that's OK. Just don't expect to have access to the same opportunities. It's as simple as that.
So it turns out that most software isn't created in Silicon Valley, and more money is made by crufty old Java and Perl enterprise systems than the next sexy JavaScript framework (which is probably just a poor rehash of a computer science concept Alan Kay developed in the '60s).
> Wow. Not sure how you made the jump to harassment there. Gender has nothing to do with it.
Gender has nothing to do with it if you don't experience the kind of issues that women in the workplace face, sure. If you're a man, you're not particularly affected by issues that affect women.
> You're limiting your applicant pool to people...with very little life outside of work.
Imagine that.
If your company exists solely to write CRUD apps in whatever JavaScript framework came out in March, you don't need to spend hours doing a deep dive into someone's side project todo list app. Unless, of course, you're looking for someone who will work 16 hours a day, 6 days a week, for one tenth of one percent.
> My response to the above would be to have the person bring in some of their work and take an hour or two to take a deep dive into it.
So another interview and another day off work? Another set of interview approaches/questions that immediately eliminate people who don't spend their free time doing the same thing they do at work 8+ hours a day?
Here's reality: 100% of the technologies I work with today did not exist when I was in school. The only thing that has remained constant, useful and relevant is basic science, math, physics, etc.
What, then, makes an engineer a good engineer? This applies to all domains of engineering, from software to manufacturing.
If I had to pick one thing, I'd say it's their ability to remain current and to learn and apply new, unknown technologies through self-study. Their flexibility and willingness to do so; almost their drive to do so.
What I am interested in is someone who has the right approach and attitude for the job, an ability to solve problems creatively and a significant enough desire to learn. I am not looking for a robot that can memorize a hundred coding solutions in a month.
Part of the discrepancy here might lie in a difference in environment and stated goals. I have never worked in Silicon Valley but I have a feeling it is a dog-eat-dog mercenary environment where people are chewed-up and tossed out as quickly as others are hired.
If you are looking for quick hires and you are not looking at the idea of adding a team member for a long term relationship with the company and the goals of the business, yeah, sure, filter them through some quick "can you code this shit fast" puzzles and move on. Great! You are hired. Here's your desk. Here's your ankle chain. Now code away!
If, on the other hand, your industry and approach is such that you view people as a long-term investment, you really should not care about those skills. Anyone with decent ability can memorize coding problems, and it is worthless. What you are looking for is to bring someone on board who will become a true asset for the business. I see it as just short of looking for a partner. I don't care about memorized performance; I care about the ability to problem-solve and the creativity, culture and thinking they can bring into the business.
These are two very different views. One is hiring cattle. The other is hiring people.
> If you are looking for quick hires and you are not looking at the idea of adding a team member
> filter them through some quick "can you code this shit fast" puzzles and move on. Great! You are hired. Here's your desk. Here's your ankle chain.
> One is hiring cattle. The other is hiring people.
You seem to be arguing against several points I never made. I took issue with your insinuation that you can just bring someone in for two hours and talk at length about some of the code they bring in. That cuts out most of the hiring pool, which might be okay if you're Google, but for most companies isn't a smart move at all.
> What I am interested in is someone who has the right approach and attitude for the job, an ability to solve problems creatively and a significant enough desire to learn.
This has no correlation with programming as a hobby or having a wealth of freely reviewable code. Plenty of people love their job, are great at it, very creative, and fantastic technically and never write one line of code that isn't closed source or behind one or more NDAs. They go to work, do an amazing job, then go home after 8 hours and don't write any code or even touch a computer until the next work day.
If they don't have any code or project whatsoever they are able to discuss I have zero interest in interviewing them.
I have never said this is a universal formula for all to adopt. This is what I do. And it has worked very well for over thirty years across a range of engineering disciplines. Google and others can't take this approach because they need to hire people by the thousands. Not me.
Another interesting thing is to talk about someone's hobbies and passions outside of work. This is where you can learn so much about a person. People are passionate about one or more things and that is a reflection of their personality.
We do a lot of work in aerospace. There are very obvious legal barriers to disclosure there. Yet, I have always found that most engineers who are truly engaged with their craft have enough interesting things to talk about outside of work that you can really get a sense of who they are and how they will approach work.
An interesting example was when I needed someone to work on some Python code. I brought in people who had zero experience with Python. I could not care less about that. I wanted someone with some of the qualities I have already mentioned. I ended up hiring a programmer with lots of C++ experience and no Python chops at all.
The first three months were dedicated to taking a deep dive into Python while getting up to speed on the project. After that the focus shifted to the project itself. This person has been with me for many years and doing an amazing job with various technologies we didn't even know we would touch when I hired him.
I know this person can pick-up any technology we might need to utilize and do an excellent job of it.
The investment is in the person, not the technologies or the ability to memorize coding puzzles.
I was going to agree with you, 98% of my career has been "google for a library, then use or tweak". It's RARE we ever actually do anything "new". However there ARE companies that do, and every once in a while YOU may have to do something new. In those cases it's good to make sure you have a foundation to build on.
I have four books in "The Art of Computer Programming" series on my desk. They'd been more or less decoration for several years, until I ran into a problem I couldn't google. Since I had read them, I had an idea of where to start, and I used one of those foundational algorithms as inspiration to craft a new, tweaked solution.
That said, not everyone is strong in algorithm development, and you can be a kick ass programmer without that skill. I'd only ask these questions if I needed someone with that skill on my team.
In that 2% of cases where you have to come up with something new, I don't really think that knowing pretty much any of this would help. You are storing a lot of information that will probably never be used. On the other hand, knowing how to come up with the solution, where to ask, what books to read, and whom to ask seems like a more important skill.
You are OK with not knowing how to invent, innovate, push the envelope, etc because you don't need to in order to collect your paycheck? Seems sad! Where is your passion for the craft?
You misunderstood me. Of course I love to invent and innovate. That's why I don't like memorizing solutions to coding challenges. But just because I don't know them by heart doesn't mean I don't know where to find the solution. Which is the more important skill.
If you don't know the difference between O(n) and O(n^2), or how to make a data structure that keeps data sorted in a particular way, you can get by using ready-made things up to some point. When that point comes, your "where to ask" becomes "take a course in algorithm design and data structures and/or read a fat and quite dense book". Which is a completely OK way to advance your knowledge, except that some employers may prefer to hire people who have already done that, rather than wait on the task at hand until they do.
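To make the "keep the data sorted" point concrete, here's a minimal sketch using Python's stdlib `bisect` module, which binary-searches for the insertion point instead of rescanning or re-sorting after every insert:

```python
import bisect

def insert_sorted(xs, value):
    # Binary search for the insertion point is O(log n);
    # the list insert itself is O(n), which still beats
    # re-sorting the whole list (O(n log n)) on every insert.
    bisect.insort(xs, value)

data = []
for v in [5, 1, 4, 2, 3]:
    insert_sorted(data, v)

print(data)  # → [1, 2, 3, 4, 5]
```

This is exactly the kind of thing that's trivial if you know the concept exists, and invisible if you've only ever called `.sort()`.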
Even if something represents 2% of your cases, if no one at a company or on the team has algorithmic problem solving skills it will represent more than 2% of your development time.
I tend to place engineers in one of two buckets-- those who make tools and those who use the tools. Most people fall into the latter bucket and can get by without hardcore CS knowledge.
Googling for a library is all well and good, but if you can't evaluate whether it's well designed and well written then you might as well rephrase it as "Google for someone else's problems to add to my own."
The background knowledge to understand what you're looking for, and what you're looking at, is really important in these cases. You mention owning (and having read) TAOCP; that already puts you ahead of >99.99% of the people doing programming out there, 95% of whom couldn't tell you who Knuth is, what he's written, or what Drofnats refers to.
I agree that these interviews are often just an annoying rite of passage, and they exclude many very talented programmers, but dismissing companies that use them as 'incompetent' seems like a stretch to me.
Learning basic data structures & algorithms is an immensely useful thing for a programmer, and completely essential in many cases. The best of these puzzles are based on problems which people have had to solve in the real world.
For the companies, there are many benefits to conducting algorithms interviews:
- setting a minimum standard, to make sure there is a shared language and knowledge you can expect any engineer in the company to have.
- making sure you are able to do more than trivial optimisation and go beyond the abstractions that libraries / frameworks provide
- giving you a simple problem to solve in 30 minutes to see if you can program at all.
It's just laziness incarnate. This pushes all the investment of the first phase of interviewing someone onto an automated process and denies the candidate the opportunity to vet the company which is just as important as the reverse.
Well, actually they do allow the candidate to vet the company: the message they send is "we don't care about you at all until you do a bunch of busywork, and if you're very lucky we might allow a human to spend some cycles on reviewing your results".
If as a company that is the kind of message you would want your prospective employees to have that's fine with me but it would be good to remember that interviewing a candidate is a two way street.
How are companies supposed to evaluate candidates without giving them some form of busywork? The best way I can think of is to pay them to complete a project but that's not possible when you are interviewing loads of candidates.
Introduce them to the team they will be working with, have them do a code review or take a ticket and find their way through docs and discuss a potential solution.
In general teams are a much better judge of talent and ability than recruiters or your typical interviewer, especially in a normal work setting.
That's really expensive for the team in terms of time investment for possibly no payoff. Someone has to make sure that a candidate that gets to that point has a good chance of being hired. That person is "your typical interviewer."
Your typical interviewer, if he or she does not have relevant knowledge is just as likely to throw out the baby with the bathwater as they are to select the right candidates. There is no reason not to have the team do the pre-selection. I know this is all terrible news for recruiters and HR people alike but really there is nobody better qualified to determine who they want to work with on a particular problem than the existing team. The only situation where you would be better off with other people making that decision is when there is no team yet.
One thing I used to totally loathe during my brief stint as a programmer employed at a large organization is that when new people showed up that were already hired it was then up to us on the floor to make the best of a whole series of bad decisions preceding that moment.
So, let's involve the team in the messaging and pre-selection as well as giving them the final say.
Think about it this way: if you believe that the team you employ is the best possible group to do the work, don't you feel they are also the best possible group to determine how to expand the group?
> Your typical interviewer, if he or she does not have relevant knowledge is just as likely to throw out the baby with the bathwater as they are to select the right candidates. There is no reason not to have the team do the pre-selection.
What are you talking about? The interviewers are almost always from the team.
> Think about it this way: if you believe that the team you employ is the best possible group to do the work, don't you feel they are also the best possible group to determine how to expand the group?
Yes, and that's why most companies have their devs do interviewing.
It would definitely take up some team resources, but it's not more complex than the work they're already doing. So 'how would you do that' is the wrong question. The question should be 'how would they do that?', and the answer is: I don't know, but I'm sure that if you ask a team they'll be more than happy to explain. After all, it is their (and not my) future that is at stake, and given that they are involved, they'll do the best possible job of making sure they have to go through it the minimum number of times with the largest chance of success.
This is still far less effort than a wrong hire would cause. Anyway, I can see that my methods are not acceptable (yet), maybe in another decade or so?
Trust is hard. Even companies that trust their tech people with the corporate crown jewels still have a hard to impossible time trusting them with such everyday decisions such as who they want to work with. It's counterproductive to say the least but that's how we've been doing it for the last 40 years, so I don't expect any major changes in the near future.
Fair enough. I think without coding interviews networking would become even more important than it is now, and it is already very important now. Moving more towards a "networking" world would mean things are less a meritocracy. It would not be about what you know, but who you know. Personally I find that unappealing because it doesn't seem fair, but I know some people aren't interested in "fair" anyways.
Instead of people complaining about coding interviews on HN, you would have a lot of programmers complaining that they are introverts who are just good at their job, and don't think that they should be punished for not being a people person.
I don't think teams should be penalized for not hiring people that aren't team players. There are good spots for introverts in IT but teams usually (though not always, I've seen some interesting exceptions to this rule) are not too welcoming to that sort of person.
If the world were gamified to the point that your skills are all that matters then yes, a solely merit based approach would work. But in the world we live in today people skills matter (a lot, actually). On a personal note, this was a very hard lesson for me to absorb, the first years of my career introvert would have been too friendly a description, anti-social probably would have been a better one. But over time I got a bit better at working with others.
Not really. Quite a bit of software is like a power tool, and I certainly would not want to label the users of power tools as 'lazy'.
But these companies are not using software as a power tool; they are using software in a way that attempts to deny their counterparty a mutual investment in the relationship. And that, to me, is lazy.
I'm a lowly c++ software engineer. You might not have to deal with that stuff, but for me it was Tuesday.
Kidding aside, someone has to build those tested, stable libraries that handle those problems (or even untested bleeding-edge code, if you are breaking new ground).
I primarily write python web-based APIs for a web application + 2 mobile apps. Just the other day, I was dealing with an endpoint that had to update hierarchical data (i.e. a collection of trees).
Due to the circumstances, normalization wasn't an efficient option. I ended up throwing together a barebones tree with a 5-line DFS implementation to traverse it. It handled inserts, updates and deletions (for my use-case) in linear time.
The details aren't so important as the fact that adding a dependency would have been overkill for my needs. This isn't to say that efficient graph implementation libraries should not exist or be used, but I was able to produce this code faster by having that basic CS knowledge.
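For what it's worth, the kind of "barebones tree with a 5-line DFS" described above can be sketched roughly like this. The dict-based node shape is my assumption for illustration, not the parent commenter's actual code:

```python
# Hypothetical node shape: each node is a dict with an "id"
# and a "children" list. dfs() visits nodes in preorder,
# touching each node exactly once (linear time).
def dfs(node, visit):
    visit(node)
    for child in node.get("children", []):
        dfs(child, visit)

tree = {"id": 1, "children": [
    {"id": 2, "children": [{"id": 4, "children": []}]},
    {"id": 3, "children": []},
]}

order = []
dfs(tree, lambda n: order.append(n["id"]))
print(order)  # → [1, 2, 4, 3]
```

The `visit` callback is where inserts, updates, or deletions would hang for a use case like the one described; the traversal itself really is only a few lines.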
And because your code was implemented in Python (rather than using prebuilt libraries that call back to C), it was 100x slower than it should have been. I'm all for knowing the fundamentals, but there is a strong argument for knowing the right tool for the job.
re: the debate between Python being slower and C faster, it all depends on context. If the context is "this is going to be called multiple times for every transaction" then yeah, look into recoding it. If the context is "this is going to be called for this particular edge case and may execute 10 times a week and take an extra 3 seconds each time" then there are more productive places to put your energy.
At the level of programming that the grandparent is talking about, I'd accept the judgement of the programmer working on it as to the appropriate solution.
Where does one get a job writing actual algorithms and data structures? I actually enjoy that and am pretty good at it. I'm sick of jobs that are nothing but glueing together poorly documented and tested libraries.
> Where does one get a job writing actual algorithms and data structures?
Two big areas that come to mind are simulation/mathematical modelling, where you're often crunching data in ways that aren't just textbook examples, and embedded systems, where you often have resource constraints that make efficiency more important.
This doesn't just mean modelling weather systems on supercomputers or writing the control software for cars, though. For example, consider user interfaces. We are increasingly looking for more intuitive input methods using techniques like natural language processing, speech recognition, handwriting recognition, and gesture-based UIs. We are looking for more intuitive output methods, such as integrating additional data with real world imagery like maps or the view through a 3D head set or camera. We are looking for systems that learn patterns in their users' behaviour and adapt to provide more likely options more quickly next time.
You won't see much of this if you're just writing simple form-based web front-ends for CRUD applications. A lot of real world software is like that, and it gets a lot of useful work done, but it's mostly pretty mundane, join-the-dots work as far as the programming goes. However, there are plenty of interesting problems out there, and we could directly improve the user's experience in new and helpful ways if we could solve them, and much of that work involves developing data structures and algorithms far beyond anything you'd find in an introductory textbook.
Data Engineering, SRE, Production Engineering are really good for that kind of thing. Especially at a larger company, but the truth is those opportunities aren't going to come up that often, you don't want to be continuously inventing your own technology unless you're living on the bleeding edge like Google.
Lots of places, but I would guess finance and gaming have it at the highest percentage. They have the strictest performance requirements so algorithms and data structures end up custom built for the job more often.
This is very true, but I don't think that anyone interviewing for a position like yours will reference to this list of things to test their candidates.
Honestly, is it unreasonable to require that people brush up on this stuff every couple of years? In my experience the majority of companies just want you to be able to do fizz-buzz-level whiteboarding and intelligently speak to your experience. I feel like we all know in advance which companies typically require a month-long review of algorithms before the interview. If you want to work for one of them, then do what you need to do to get the job there. We all agree it's annoying, but I really don't think it's as big a deal as people make it out to be.
You can conceivably move up to management, never have to deal with algorithm hazing again, and make more than the guy that has to refresh every few years.
The rewards just do not add up for this to remain an industry practice.
But the innovative businesses that develop genuinely new technologies also hire "the guy that has to refresh every few years". Leaving aside the potential financial gains if you're in early enough and they have a big exit, those also tend to be interesting places to work.
Moving into management is essentially changing career, and for the kind of person who actually enjoys programming and wants to do something creative and technical, there's no reason to assume they would either enjoy the new role or be any good at it.
That sounds like a career change that I wouldn't enjoy, despite being able to also avoid some of my least-favorite aspects of staying on the dev side of things.
> I see these challenges as a great way for excellent experienced developers to weed out incompetent companies.
This is far too broad. There are plenty of jobs in certain companies where a good understanding of the theory and practice encapsulated by these challenges is the bare-minimum requirement for doing well. Companies that leverage coding challenges like these for these positions aren't incompetent (at least, not for that reason). Just because your work doesn't require this depth of understanding of CS doesn't mean there is no such work.
I'm skeptical, however, that the number of such jobs is very large, even at the "usual suspects" companies (Google, Amazon, etc.). In most jobs, even at these places, one can get by with the most rudimentary ability to understand what 'greater than' and 'lesser than' mean, plus a chart describing the time/space complexities of the structures and algorithms in a library.
It's not about actually having to implement these algorithms in your practical, day-to-day work. It's a challenge to test your reasoning and problem-solving ability in the abstract, one you can administer in 15 minutes. You can't really test a candidate with real-world workloads, can you?
You absolutely can test a candidate with real-world workloads. It takes longer than 15 minutes, though.
I have no idea why anybody cares about the 15-minute thing. Each person you hire adds thousands of hours to your available labor. So even if I spend 100 hours finding the right candidate, I'm still way ahead. And the better my working environment is, the lower my turnover, in which case I can spend even more.
While I fully agree with you in that interviewing is broken, and it's broken because we throw away a lot of great people, you'll be surprised how much time a company spends finding the right candidate.
Imagine we spend an hour per candidate on screening and pass 10% of people, then spend 6 hours on each on-site, with 25% of them passing. Finally, 50% of candidates accept the offer. This is very typical math, and it ends with over 100 hours per developer hired. For each hire, we also had a declined offer, 6 candidates rejected on-site, and 72 failed screenings!!
Given those round, but not really all that far from reality numbers, any increase in screening time will spiral out of control unless some other multipliers change. The one that is most likely is that said company is giving offers to less than half of the people that would have been successful.
Those numbers also show why it's so important for a company to make competitive offers and woo candidates, and why they'd not like it when people interview at 6 places at once: Run the calculations just changing the acceptance rate down to 30% and up to 80%. If you are trying to recruit from a big name university, chances are that your candidate really is talking to a dozen serious SV companies and gets 4 serious offers.
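The funnel above is easy to sanity-check in a few lines. All the rates are the hypothetical ones from this comment, not real data:

```python
# Interviewer hours per hire, working backwards from one accepted
# offer through each stage's pass rate.
def hours_per_hire(screen_hours=1, screen_pass=0.10,
                   onsite_hours=6, onsite_pass=0.25,
                   accept_rate=0.50):
    offers_per_hire = 1 / accept_rate          # 2 offers
    onsites_per_hire = offers_per_hire / onsite_pass   # 8 on-sites
    screens_per_hire = onsites_per_hire / screen_pass  # 80 screens
    return (screens_per_hire * screen_hours
            + onsites_per_hire * onsite_hours)

print(hours_per_hire())                   # → 128.0 hours per hire
print(hours_per_hire(accept_rate=0.30))   # acceptance drops: costs balloon
print(hours_per_hire(accept_rate=0.80))   # better wooing: costs shrink
```

Dropping the acceptance rate from 50% to 30% pushes the cost past 200 hours per hire, while raising it to 80% brings it down to 80, which is why companies care so much about offer acceptance.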
Therefore, while I agree with your sentiment, you can probably see why it's so hard to convince someone running the traditional SV pipeline to make changes, as you are either raising costs or telling them that their current outcomes are a gigantic dumpster fire.
Anybody spending 6 hours per candidate on site can afford to test them on some real work. I'll typically do 2 hours for pair programming, and/or 1 hour for reviewing some existing code. 1-2 hours is also a good amount of time for a joint design session on some real problem.
You're definitely right that we're missing good people. I keep coming across people who didn't get the right job until they invested a bunch of time into learning to beat the Mensa-puzzle interview. So one way to make the numbers better is to find more good people.
If we're going to do the math, I also think we need to account for the cost of hiring people who are not so good. If we test something that we merely hope correlates with doing the work, we're asking for large downstream costs. The more our interview tests what people actually need for long-term success, the better off we are.
How do you keep interviews consistent for candidate comparison with pair programming? I'm assuming you mean that y'all are pairing on actual work. That can vary. Yesterday was simple CRUD, "candidate nailed it." Today, there is an obscure concurrency bug, and the candidate would need more than 1-2 hours to understand the landscape of the complex code base we are asking them to pair in; "the candidate asked some ok questions I guess."
I fully agree with "the more our interview tests what people actually need for long-term success, the better off we are."
^^ This. As tokenadult always points out, a work sample test is the way to go.
I do it by pairing on a standard problem, one I'll use over and over. That can be real work in the domain or a toy problem. Both seem to work pretty well to me.
I'm pretty sure having candidates do unpaid work is illegal in California, which is another reason not to have them pair on actual work that you plan to ship.
You are often weeding through a large number of applicants, so spending 100 hours on each candidate is just not feasible, and, generally, not necessary. Plus, high quality, experienced developers are not going to want to spend 100+ hours doing real work for companies just to see if they can get a job.
I'm more than happy to spend an hour here or there for a phone screen or coding test to show that I understand the basics of data structures and algorithms and that I can use that information and my reasoning ability to solve problems I haven't seen before.
And how many candidates do you have to consider to find the right one? And what proportion of the interviewer's time is spent on the live coding interview vs setting it up and arranging it? I've been in this boat recently. It doesn't really take much to get to double or even triple digit hours if you don't automate something.
No, it's probably more like four 1-hour interviews where they do whiteboard coding 2-4 times for 30 minutes, with the rest spent discussing previous experience, what they are looking for, what the company's doing now and looking to do in the future, etc.
"reasoning and problem-solving ability in abstract"
Implementing a syntactically correct tic-tac-toe program on a whiteboard doesn't fit that description. Plus, many people are more comfortable working behind a screen (the real world) than in front of a whiteboard.
So one big question all companies have is how to interview kids fresh out of school, or with at most a couple of years on the job, for programming positions.
This is less a question of competence and more a reflection of the age structure of the market for programmers. You will be working with people younger and less experienced than you. You will probably be hiring people younger than you -- how will you interview them?
As someone with a couple decades professional experience, I see these challenges as one of many ways for competent companies to attempt to find competent programmers despite a lack of experience by the interviewee.
The last company I interviewed for gave me coding challenges, but that's not all they asked, I got plenty of questions that allowed my experience to shine. If you only got coding challenges as an experienced developer, then yes, that would be a reason to avoid that company.
On the flip side, my willingness to take the coding challenges in my interview allowed me to highlight my practical experience, because I crushed them with little preparation. Other experienced devs who refused the coding challenges or dragged their feet and complained about them lost the opportunity to receive an offer.
That's like an EE saying, I don't really understand capacitors, but I am building a circuit like this one and it has a capacitor, so I'll just borrow the values and tweak them in simulation.
>> That's like an EE saying, I don't really understand capacitors, but I am building a circuit like this one and it has a capacitor, so I'll just borrow the values and tweak them in simulation.
Dude, in EE interviews, we just ask them some basics about capacitors and how to use them. We don't ask them to derive the mathematical equations of electrolytic capacitors.
In fact, in most EE interviews, you just use them to solve ONE problem and be done. Nobody questions whether you know EE stuff once you've solved ONE problem.
In programming interviews, you have to solve MANY problems and interviewers just want to keep finding ways of docking points.
I've never applied for a strictly dev position, but my friends have told me they've been asked about how different forms of self-balancing trees work. I haven't implemented my own self-balancing tree since Data Structures, and I'm pretty sure they haven't either. If someone asked me to implement one in an interview I'd honestly think they were crazy. Who would implement a data structure or algorithm they haven't used in nearly a decade without looking an implementation up first?
Bearing in mind that I use my programming ability for analysis rather than developing applications, I'd say that programming is like 90% fundamentals and 10% looking stuff up. If a company really wants to test someone's programming ability in an interview, I feel like the best thing to do would be to make up a programming language, give them a reference sheet, and then ask them to program a couple different versions of fizz-bizz.
Obviously it's not a perfect idea, but I think you'd at least be testing the skills people actually use when programming, rather than whether they can remember every bit of syntax from every language they have listed on their resume. I mean, if I lied and said I knew javascript, I'd almost certainly fail a programming test that used a made-up language based on it.
> If a company really wants to test someone's programming ability in an interview, I feel like the best thing to do would be to make up a programming language, give them a reference sheet, and then ask them to program a couple different versions of fizz-bizz.
Are you joking? Doing fizz-buzz is way too low of a bar. That doesn't even show you can use standard classes and things, like maps and lists.
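For reference, the canonical fizz-buzz really is only a handful of lines, which is why it's treated as a floor rather than a filter:

```python
# Classic fizz-buzz: multiples of 3 → "Fizz", of 5 → "Buzz",
# of both → "FizzBuzz", everything else → the number itself.
def fizzbuzz(n):
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))  # last element is "FizzBuzz"
```

It exercises loops, conditionals, and modulo, and nothing else; no collections, no library knowledge, no design.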
> but I think you'd at least be testing the skills people actually use when programming
That would tests very few of them, unfortunately.
> rather than whether they can remember every bit of syntax from every language they have listed on their resume
No one I see tests for syntax, but for the more important/broader concepts.
> if I lied and said I knew javascript, I'd almost certainly fail a programming test that used a made-up language based on it.
Probably not. Most mainstream languages have pretty similar syntax. And if you expected people to know others without warning, people would raise hell. Anyone who knows C can probably guess the gist of what a small snippet of non-tricky Javascript does.
Well, other than R, I've never programmed in a functional language before. I programmed Javascript in my High School intro to CS class back in 2004, but I can't imagine I could pass a test based on that alone.
Fizz Bizz is stupidly easy, but I honestly don't think that much of programming is hard in the first place. The hardest thing, in my mind, is designing a coherent program. You could add any requirements you'd like to your made-up language (no automatic garbage collection or reference counting, etc.), but I think you'd be able to tell pretty easily whether they were the real deal.
Sadly, that describes more than a few EEs and MEs that I've worked with in the past. Where the act of turning a key in a commercial software package starts to displace practical design considerations that they learned in the classroom.
You don't need to be able to build a capacitor from scratch in order to understand how they work. Furthermore, electrical engineers don't have to build capacitors from scratch during job interviews to prove their competence.
No, but there's a level of the EE tech stack where you do need to understand (and have the ability to build) the level below. I don't need to know: how to construct a transistor, build a logic gate, build a look-ahead adder, construct an ALU, CPU, computer hardware, write assembly, or write a compiler to do my job (although the last couple start getting close enough to my bailiwick that I think they're useful).
I'd expect that something basic in an EE job could be "draw the core part of an oscillator circuit, then we'll talk about the principles of its operation". The discussion would end up going into some properties of capacitors, why they chose that exact form of oscillator, expected use-cases, etc. The behavior of the object lower in the "stack" becomes important, and so does a real understanding of how they work. Of course, actually requiring them to build one would be ridiculous.
>>I'd expect that something basic in an EE job could be "draw the core part of an oscillator circuit, then we'll talk about the principles of its operation".
Sure, but that's the equivalent of drawing a diagram that explains how quick sort works, as opposed to implementing it using real code. Most companies demand the latter during interviews.
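For what it's worth, the gap between the diagram and the code isn't huge for quicksort; a naive (not in-place) version is only a few lines of Python. A sketch, not a production sort:

```python
def quicksort(xs):
    """Naive quicksort: pick a pivot, partition, recurse on each side."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]    # everything below the pivot
    right = [x for x in rest if x >= pivot]  # everything else, duplicates included
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```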
The places I've interviewed (a few big names and a few small ones) either wanted something pseudo-code-like (on the whiteboard), wanted an explanation of the algorithm (potentially with some clarifying diagrams or code), or actually provided me a computer with an IDE.
> I've never had to deal with one of these sorts of problems in my day to day work;
You're from a different niche, and you're actually being paid less than the people who know the stuff you don't.
From [0]:
"[S]killed cloud and backend developers, as well as those who work in emerging technologies including Internet of Things, machine learning and augmented/virtual reality can make more money than frontend web and mobile developers whose skills have become more commoditized..."
A completely delusional comment, devoid of any roots in reality whatsoever, and an insult to the decency and dignity of software engineering, as well as an invitation to strip software engineering of the respect and compensation it deserves.
A 13 year old making a website for his dog in PHP can fit your definition of full stack web engineer.
The underlying parts of this "full" stack require significant domain knowledge around algorithms, data structures, computer architectures, operating systems, distributed systems, networking and communications, programming languages, etc... and most importantly, critical thinking and engineering rigor beyond trial and error and cargo cult copy-pasting from stackoverflow into your "get-things-done" duct-taped spaghetti code base.
Those underlying parts created by the people that you now call "incompetent" are required to design, implement and maintain the "kick-ass" babyproofed playground you live in and that allows you to put food on your table. Have some respect.
If you are so kick-ass and get-things-done, check out the source code for Linux, Chromium, v8, node or libuv, Python, Ruby or whatever technology you use and try to get something done there to a level of quality at which it gets accepted, and see what happens. You and your kick-ass denomination will be stomped on and brought back to reality.
The problem with your reasoning is that you expect the interview to mirror job requirements rather than select for job performance. In many cases the best instrument to measure the latter will resemble the former, but there's nothing intrinsic about the relationship. If giving someone a brain teaser or having them recite trivia provides a strong signal for job performance, then it makes sense to use these instruments. You could argue that they don't provide a meaningful signal (which I think Google may have discovered with the brain teasers), but that is a separate discussion.
Something else to consider is that the interview is optimized to select for true positives and reject false positives at different rates. It's been discussed elsewhere that for a company like Google avoiding false positives is much more important than finding good candidates. So it may be the case that some instruments like trivia recitation provide the right signal at the intersection of the optimization curves. The fact that many (even most) qualified candidates score poorly on these instruments doesn't impugn their utility; their primary goal isn't to identify good candidates but to filter out bad ones.
I suffer greatly in coding interview questions. I'm not a kick-ass full-stack engineer but rather a research engineer. I code every day but usually it is proof-of-concept demonstrations so I lack formal education / guidance of many professional coders. I'll knock a take home assignment out of the park but I do envy people who are great coders.
I agree 95%. However, inevitably there will be a developer that will want to implement a trie for reasons. It's hard to be able to reason with her (or technical leadership when appealing the decision) when advocating for an off the shelf alternative if you can't explain why this isn't a brand new problem. On the flip side, one actually does need to create a new data structure on occasion, and obviously there we would want to be able to implement the common alternatives.
Anyway, it's complicated. Especially when hiring for technical leadership.
If I need to convince another developer that a particular path is well-worn and that they are drifting toward NIH syndrome, I have the benefit of time to develop an argument and resources with which to do so. I do not have to do it in 45 minutes, with no resources and just a whiteboard.
To pre-counter an expected objection, if your company makes significant decisions like that in a single 45-minute (or any length, really) meeting you need to change your design process, not your hiring process.
> I have the benefit of time to develop an argument and resources with which to do so.
One would assume so. That's not always the case in my experience. Improvisational discussion of design tradeoffs and costs happen a lot. YMMV, I guess.
Though I agree that no-google, closed-book, no-IDE whiteboard development is very unnatural.
> ...if your company makes significant decisions like that in a single 45-minute (or any length, really) meeting you need to change your design process, not your hiring process.
I'd agree with that, but design processes on the whole tend to be more dysfunctional than organizations realize. There's still a lot of stuff running on deprecated OSs, dead languages, and mountains of technical debt.
> One would assume so. That's not always the case in my experience. Improvisational discussion of design tradeoffs and costs happen a lot. YMMV, I guess.
They certainly do, but significant, unchangeable decisions should not be made that way. If I think there is a better alternative to something expressed in one of these discussions, but do not have the details at hand to make the case, I will voice the alternative and compile the details later. Choices like that should not be made without documented rationale anyway.
> I'd agree with that, but design processes on the whole tend to more dysfunctional than organizations realize. There's still a lot stuff running on deprecated OSs, dead languages, and mountains of technical debt.
The solution is to fix the organizational dysfunction. Hiring to the dysfunction is a band-aid at best.
In a way I prefer this to "how many ping pong balls can fit in a school bus" that was all the rage in the 90s and early 2000s. But... man...
I have a computer science degree, I've been coding for 20 years, and I've held (and kept) a CTO role at two mid-sized startup companies. Currently I'm considering looking for a job at a larger company (where I wouldn't be CTO but I'd be hopefully paid more) and these kind of questions make me think I need to almost go back to school before I can start interviewing.
When did it get to the point where we need to read thick books on programming interviews and spend hundreds of hours on HackerRank practicing algorithms to get a job where we probably won't use any of those day-to-day?
Edit:
For fun the other day I decided to code a linked-list from scratch in C and it took me a half-hour plus... and I have known that data structure for decades. I can't imagine doing a harder problem under the pressure of being an interview.
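For comparison, here's roughly what that half-hour of C collapses to in Python, where the garbage collector does the hard part (a sketch of a singly linked list, prepend-only):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next  # None marks the end of the list

def push(head, value):
    """Prepend a value and return the new head."""
    return Node(value, head)

def to_list(head):
    """Walk the chain and collect the values, for checking our work."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

head = None
for v in (3, 2, 1):
    head = push(head, v)
print(to_list(head))  # [1, 2, 3]
```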
Keep in mind that, because the number of programmers is increasing, the majority of them are fresh out of school or close to it. Algorithms interviews are largely designed for interviewing people who don't have much experience, so they're skewed toward testing what people learned in school.
If after 2 CTO roles you interview for a job and they ask you to write a linked list algorithm, then would you want to work there? I mean you might, but that alone should at least raise the question. If they're not asking you how you'd manage a team of smart but stubborn coders or structure their company's software architecture and services to be more reliable, then they certainly aren't paying much attention to you or what your strengths and experience are. Or they simply don't know how to hire an experienced person, which signals that they don't know what to do with an experienced person once hired.
Also, it would take the vast majority of great coders at least a half hour to code a linked-list from scratch in C and get it right, depending on what you're doing.
If the number of unfilled positions is increasing, that would tend to lower the requirements, not raise them. If the number of filled positions is growing at the same rate as the number of unfilled positions, that would be evidence that the requirements are staying roughly the same on average. (Except I'm sure there is some small amount of average improvement and widespread knowledge increase every year that everyone benefits from.)
In any case, it still makes sense for companies to come up with interview strategies for inexperienced devs, and the article at the top is one such way. It doesn't mean companies will interview experienced devs the same way, and in my experience, they don't.
(EDIT I had written some other junk based on me misreading your comment, apologies if you had to spend any time on that.)
I agree. To find out whether someone is actually competent, open questions that allow the candidate to elaborate on their previous problem-solving experience are much more useful.
Some examples:
1) Explain how and why you would implement authentication on an HTTP API. What other authentication methods do you know, and why would you not use them?
2) Tell us about a scaling-up challenge/problem you had on your last application and how you solved it.
3) What technology stack would you use for a browser-based P2P file sharing application?
4) Name a library you recently discovered. What problem did it solve?
5) What mobile app on your phone right now has a bad user experience in your opinion? How would you make it better?
Personal opinion but I think "how many ping pong balls can fit in a school bus" is not that bad.
I've asked HR people why they ask that question, and it was not to have a correct answer but to demonstrate your train of thought.
When I give a coding interview, I will use more development oriented questions but I want to see HOW they get to the answer, not the fact they memorized an algorithm that just so happened to be on the coding test.
If you've got that much experience and someone asks you how to implement a linked list, I'd say you should politely leave the interview. Graduate questions like that are insulting once you've got past the junior levels, as companies asking them are either too lazy to put a proper interview together for more senior levels, or not senior enough to know the difference.
That's not to say they can't ask you about data structures, but they shouldn't be asking you to code them up from first principles if it's something the language can do for you.
I find it much more fruitful to have a technical chat with candidates instead of being dogmatic about asking toy questions. It's pretty obvious when someone doesn't know what a linked list is with a 30 second chat. Heck, even drawing a few boxes and arrows on a napkin will suffice.
I'm a C+/B- developer. I've been doing this stuff for touching two decades, and these questions gave me a prickly sweat down my back. There is a part of me that feels very lucky to have plopped onto the Earth right when I did, before there was an organized process to weed me out. I'm entrenched enough that interviews are generally just culture fit interviews. I think I'd need ulcer medicine to get a job with no reputation.
How often have you had to code up solutions to any of these challenges in the last two decades?
These sort of challenges are great for learning some of the theory behind computer science and getting you thinking, but they're not a useful part of the interview process imo.
At some jobs we make the libraries you use to solve those problems. These interview questions test very basic CS concepts and many of them are essentially just extremely watered down versions of real life problems. I can teach a dev how to Google for libraries, I can't teach him basic CS concepts. I would much rather hire for the latter.
In fact this whole thread makes me uncomfortable. I wouldn't want to work with anyone who gets nervous when told to reverse a linked list.
Sure, but those jobs are few and far between. If you are indeed writing that sort of code, then it makes complete sense to test people on that, but the vast majority of jobs using this style of testing are not.
Even for other jobs, using this style of interviews reduces your false-positive rate, because any style of interviewing that's good enough for the sort of software this tests for is going to be good enough for them.
Consider trying Codewars. Their kata problems are a mix of algorithms but also things like standard library mastery or functional techniques. Personally I find it a lot more enjoyable than "interview style" algorithms questions.
Semi off-topic, but I am curious: how much of the CS fundamentals do you expect a backend (or full-stack) developer to recite in a job interview? I did all the CS theory stuff many years ago at university, but I could not pass an interview test full of these CS basics.
Isn't it far more important to know how to design a modern, maintainable, scalable web application? Know how and when to cache stuff. How to design an API. How to integrate with third-party APIs. How to set up logging. How to sanitize data and handle other common security problems. How to set up test/build/deploy structures. Et cetera, you get the point.
If, while building a web application, I encounter a "big data"/"data processing" problem, I will find an appropriate solution using CS best practices. But hell, I would struggle to implement a simple array sorting algorithm under pressure.
> Isn't it far more important to know how to design a modern, maintainable, scalable web application? Know how and when to cache stuff. How to design an API.
How do you do that without some basic understanding of computer science-y stuff?
How do you define "scalable", how do you measure it? How can you have some intuition about a design before we spend 3 months and many sprints building it first?
How do I know when to cache stuff? Does it matter if I have calls to a remote cache in a tight loop? Should I be using an in-process, out-of-process, or remote cache for a particular piece of data?
Here's one that comes up A LOT with junior and mid-level developers: floating point numbers aren't magically precise things.
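The classic two-line demonstration, in Python but equally true of IEEE 754 doubles in any language:

```python
# 0.1 and 0.2 have no exact binary representation, so the sum picks up error.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# When exactness matters (e.g. money), use decimal arithmetic instead.
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```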
There's a balance here, somewhere. I personally would never use the trivia questions in the OP. But the idea that you don't need to know even basic computer science-y type things, seems crazy to me.
I can't see myself hiring a computer programmer who is offended by being expected to have a casual acquaintance with computer science.
> How do you do that without some basic understanding of computer science-y stuff?
> How do you define "scalable", how do you measure it? How can you have some intuition about a design before we spend 3 months and many sprints building it first?
> How do I know when to cache stuff? Does it matter if I have calls to a remote cache in a tight loop? Should I be using an in-process, out-of-process, or remote cache for a particular piece of data?
You're proving the above poster's exact point. You are putting your weight in applied questions that rest upon the developer's specific experience. This method is the opposite of evaluating people for their ability to memorize a half dozen algorithms and data structures.
In my experience interviewing candidates, asking people to implement a caching algorithm is a distraction for both parties. A much better evaluation is their ability to draw a box-and-arrow diagram and talk it through. This is much more effective for understanding their thought processes and knowledge. It is also much, much closer to the _real_ day-to-day of today's engineer: communication, advocacy, and breadth of knowledge. Code is cheap; businesses should screen candidates for genuine interest.
CS textbook questions introduce enormous amounts of bias, especially in panel interviews. It is a dangerous trap that companies use to further entrench their team cliquishness and departmental monoculture. It is ripe for Simple Sabotage. Simply put, it's lazy.
> How do you do that without some basic understanding of computer science-y stuff?
What do you mean by "understand"? In some ways your question is like asking an engineer how he can choose a particular bolt for use in, say, an elevator, without understanding the basics of how the failure characteristics of the bolt are known. There's some relationship there, but is it really an important one? Sometimes it may be. Other times (most times?) probably not.
> How do you define "scalable", how do you measure it? How can you have some intuition about a design before we spend 3 months and many sprints building it first?
"Scalable" is entirely dependent on the business' needs and plans, and measuring it certainly doesn't require any formal understanding of computer science. This is fairly basic arithmetic. As for the design intuition: computer science-y things are only so useful as far as that goes. Their real utility is in optimization, not designing ahead of time.
> I can't see myself hiring a computer programmer who is offended by being expected to have a casual acquaintance with computer science.
Well, as you yourself noted, questions of this sort aren't for "casual acquaintance" levels of knowledge. These kinds of exercises are basically things an undergrad (or, in some cases, graduate) student thinks are important because they're all he knows.
Incidentally, the fact that interviews focus so much on these kinds of "CS fundamentals" is one reason I chuckle whenever somebody tries to describe these companies' activities as "engineering." Any engineering that goes on in places that put much weight to these kinds of questions, and which isn't interviewing candidates specifically to engage in academic, or close-to-research CS, day-to-day, is usually by accident.
What matters is how much the company you want to work for expects you to whiteboard or code to pass their screens.
Big company? You almost certainly need to have your CS challenge question toolbox full.
Smaller company? Many follow the big company interview methodology, but some don't, so more likely you can get away with not breaking the spine on your Cracking the Coding Interview.
As a fullstack developer most of these things are not needed. Having a good understanding of what is optimal is good, but you shouldn't have to be able to recite all the big O notation.
Funnily enough none of the tests I've had ever talk about relevant things like data normalisation / de-normalisation or anything that will be used in the project.
Never mind the specifics of the selection process, Silicon Valley's risk-averse approach to hiring is baffling. Instead of making candidates go through an overly rigorous process, why not embrace California's at-will status and hire and fire often? Employers and employees already view each other with suspicion in terms of loyalty, the former willing to downsize capriciously at any moment, and the latter willing to jump ship every year or two. So accept the mobility and instability of startups. Instead of spending immense resources on the interview process to find the perfect candidate, focus on getting the good ones and terminating swiftly if they turn out to be bad.
Not to mention, even when you have these types of interviews it still doesn't seem to filter out toxic people, such as the harassers of Susan Fowler at Uber.
Of course, it would be easier for workers to be okay with this arrangement if basic needs such as health insurance were decoupled from one's employer. That's another argument for single-payer in CA.
>>Never mind the specifics of the selection process, Silicon Valley's risk-averse approach to hiring is baffling.
I think it makes perfect sense. A bad developer hire is not just unproductive. They also tend to be a burden on their coworkers and reduce their productivity by asking trivial questions and writing buggy and/or hard to maintain code.
With startups the risk is even greater. Think about it: the odds are already stacked against you. Why would you take even more risk by randomly hiring developers? Sure, you can fire them, but by then it may be too late.
I've read that rationale, and it makes sense, but it almost makes it sound like the organization is humoring a bad dev without oversight for months upon months until they suddenly realize their inability. Maybe they should spend more time actually running their company instead of interviewing candidates all day.
The situation now is not better. People have cargo-culted a strategy that is based entirely on having enough applicants that you can afford to have a stupid false-negative rate. It's also easily gamed, so you have to keep trying to find new puzzles for people to solve.
Good developers won't tolerate a contract-to-hire offer or will sniff out that they're a fire-fast environment. That, and it's pretty terrible for employee morale.
I'm not suggesting contract-to-hire, so much as a formalized and transparent concept of probationary periods. I'm pretty sure at-will employment means you don't need to bother with contract-to-hire unless you want to be cheap with paying your employees benefits. So these employees shouldn't be on contract anyway unless expressly temporary.
It certainly would be terrible for morale if implemented thoughtlessly. But if it was done in a way with clear expectations of employee performance, then terminations would not be arbitrary. And the morale hit of being terminable at any time, a situation which legally exists in California at the moment anyway, would be mitigated by more companies being quicker to hire instead of forcing long, arduous interview processes.
So then the situation becomes "If mgmt. is just, then they will keep me on, if they are dumb, then I can get quickly hired across the street anyway."
Sure, the end state of "everyone hires quickly and fires quickly" is better, but there's an intermediate state where only a few companies use the new paradigm. In that one, the "can get quickly hired across the street anyway" isn't true. And the people who get fired are going to spin narratives about how getting let go was management being stupid and untrustworthy, regardless of how true that is, so you'll wind up generating noise that scares off developers.
> why not embrace California's at-will status and hire and fire often?
This sounds like an utterly miserable experience for both new hires and coworkers of the new hire. Do you really want your company's onboarding process look like FNG syndrome?
Well, I can confidently say that the people I know don't have a problem with the current algorithmic interview process; in fact, my friend group generally finds it fun. Conversely, I don't know anybody who would take a job with a non-trivial chance of being fired.
That's fair, and certainly there are plenty of engineers who are content with the current state of tech interviews, but there also seem to be many who aren't, given that this conversation is taking place on a comments thread full of criticisms of the tech interview process, and these threads seem to manifest at least once or twice a week on HN.
There is a cost involved with onboarding/offboarding employees, which is a common reason why this tactic isn't used. Between the time spent spooling someone up on your internal processes, completing required HR tasks, requisitioning equipment, etc., it is often a pain to go with the hire/fire-quickly method. I'm sure you could build your business around it, but at that point you might need to look at the morale of the company.
Also, if you fire someone in California they will likely file for unemployment which means your UI reserve account requirements might go up.
Indeed. A couple of weeks ago I had a recruiter from Atlassian send me an email saying they were interested, and could I complete these 6 problems on Hacker Rank, each with a time limit.
It's not enough to outwit a Burmese Python; you need to do it while a clock is ticking.
We all know the best performance and indicator of skill always comes from coding with an egg timer or stop watch.
Oh and did I mention the Hacker Rank challenge was being asked before even speaking to a human being about the job?
These companies carrying water for Hacker Rank and their ilk ought to be totally ignored.
Programming is only a competition sport when it is voluntary, no need to turn access to a job into some kind of Christians versus the lions spectator sport.
If someone were to approach me with that kind of offer I think the response would not be much longer than one line (and I'd try hard to keep it polite).
>"These companies carrying water for Hacker Rank and their ilk ought to be totally ignored."
Yeah, I just deleted the email.
In the case of Atlassian and some others I have seen recently, the job of the recruiter now seems to be sending out links to Hacker Rank challenges after a successful keyword search on a candidate's C.V. I imagine recruiters love this because it means they now have to understand even less than the little they understood before.
> Oh and did I mention the Hacker Rank challenge was being asked before even speaking to a human being about the job?
I prefer that a lot more than scheduling a 1 hour phone interview during my workday, to do the same stupid questions over the phone, with someone who can describe neither the job nor the company.
Sure, that scenario is equally broken and happened to me with a big S.V company recently. If the company can't even schedule an interview with someone on the actual team then I'm probably not interested in the company and can and have cut the phone interview short upon learning that.
It's even better when you or a recruiter has sent them your resume and it's obvious that they've ignored it and are looking at your outdated LinkedIn profile.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in one_true_addition_test_case
AssertionError
The "correct" answer according to my pet parrot is (5).__add__(6). Of course, as per real-life, "correct" is usually just an aspect of if you find the interviewer's solution instead of whether your solution is actually correct. Better luck next time!
The sum built-in is still using the addition operator under syntactic sugar, though (as is the other child comment). I think the intent of this question is to solve it with bitwise operators, i.e. [1].
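Presumably something like the standard XOR/carry loop, sketched here for non-negative integers (Python's unbounded negatives would need an extra fixed-width mask):

```python
def add(a, b):
    """Add two non-negative ints using only bitwise operations."""
    while b:
        carry = (a & b) << 1  # positions where both bits are 1 carry left
        a = a ^ b             # bitwise sum, ignoring the carries
        b = carry
    return a

print(add(5, 6))  # 11
```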
If you're going to have to argue about the 'intent of the question' then you've already lost.
Really, the question should be 'add these two integers' and any solution that produces the result in a transparent and straightforward way should be honored with top marks.
Trick questions, especially those where only the interviewers pet solution is permitted are a sure sign that this employer is best avoided because they care about form and ego more than they care about getting the bloody job done.
The problem is that the multiplication blows the precision of the used format (either double for math.log(...) or the one the decimal module uses).
I had also thought about the intention of the question trying to get you to express the sum as bit operations (though I admit that sounds like going for a very low-level profile), or maybe just "thinking outside the box".
I recently tried, for the second time, to apply to an agency-type place. I have 10 years of coding, have worked with big data, have done micro coding (developed protocols for vending machine transactions), and have done frontend and backend work at small and large scale.
But this place had 3 of these types of challenges, and you had 1hr 30min to complete all of them. I'm sorry, most of this stuff you will NOT find in any job unless it's some specialty. For a backend/Angular/React developer you will not find this at all. I feel that companies that do this are out of touch with the correct way to find candidates. But on the other hand, I also see why they do it, since programming has become so much more popular nowadays.
In the same way that nobody ever got fired for suggesting IBM, I imagine there's a lot of HR / tech hiring aping MS/Google/FB, even if it makes no sense in their business development context.
I don't understand code challenges. Can you even program without Google and Stack Overflow anymore? And god-forbid being experienced in a dozen frameworks and libraries if it's not the exact combination the company uses.
Yes, you're expected to be able to write basic stuff like most of these challenges without Google or StackOverflow.
There are some here, however, that you really shouldn't be expected to write, like specific algorithms from memory; if you're just memorizing 20 different sorting algorithms' exact implementations, clearly that's not producing much, if any, value for you.
If someone however was to give you this Algorithm section https://nbviewer.jupyter.org/github/donnemartin/interactive-... and tell you to implement it, that would prove whether you had the skills needed to properly use Google and StackOverflow for anything but a direct copy+paste and could be a very good test for a programmer.
Basic data structures, knowing when to apply them and how to solve fairly straightforward challenges like "does this string contain any duplicate characters" when you have full use of Python's builtin data structures are completely fair interview questions in my opinion.
Just picking 1 or 2 of these questions that seem vaguely related to your work is enough; combined with asking some basic technical stuff like "What's the difference between a class and an object?", it can weed out a shocking number of people, showing they don't even have conversationally useful knowledge about a topic.
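The duplicate-characters question really is a one-liner with the builtins, which is the point; the interesting part of the conversation is the candidate explaining why it works:

```python
def has_duplicates(s):
    # A set drops repeats, so it is shorter than s iff some character recurs.
    return len(set(s)) < len(s)

print(has_duplicates("abcdea"))  # True
print(has_duplicates("abcde"))   # False
```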
I've been coding for 4 decades. There are so many different standard libraries and function calls to remember that only the people that are one-trick ponies (one ecosystem, deep knowledge) are going to be able to do the majority of these without an outside reference.
And those are exactly the people you would not want for a job because they would most likely not have the flexibility to shift away from that ecosystem if there was a need because it would take them forever to get up to speed.
It's much better to know how to use outside references efficiently and to be able to 'swap stacks' in a couple of weeks than it is to be an expert in some language that could very well be obsolete, and that's assuming that there are many jobs left today where deep knowledge of only one ecosystem will get the job done.
The time when you could spend 4 years learning a specific library and language and then make a career out of that is long long gone. What we have today is more akin to enormous libraries sitting on fairly rickety narrow foundations where - if you're lucky - the documentation of that library is relatively good, and the half-life of the whole thing is measured in months, a year at best. Today you work in Python, 3 months from now in Go and a another 6 months after that it will be JavaScript, typescript, clojurescript, coffeescript or <insert your flavor here>.
So, you want your problem programmed in Assembler, C, C++, Python, PHP, BASIC (name your dialect), JavaScript, Erlang or however many other languages you care to name? I'll accept the challenge. But I won't cut off my external memory bank in order to prove that that is something I can do.
(Well, truth be told, I probably can do it in a couple of the above, but that's mostly because there was a time when the world moved slowly enough that you could actually invest a couple of years. These days it is much better to know how to leverage open source and Google than it is to know the gritty little bits of whatever language is the flavor of the week.)
Of course, if you are looking for 'javascript programmers' it is fair to ask some questions about javascript. But in my experience it is a lot more efficient to search for programming talent and good debugging and writing skills irrespective of what language they are currently trained in.
Languages are a means to an end, programming is the skill you want, not 'a particular language programmer'.
I completely agree - that's why I encourage only sticking to basic questions and accepting fairly vague answers as long as they're in the right direction. Maybe your Python knowledge isn't perfect, but you know how to solve it in another language; I'm good with that. Maybe you don't know the exact name of a data structure in the Python standard library; as long as you can describe what you're looking for, I'll point you in the right direction.
The people who don't know when to use a hashmap instead of a list of structs are the ones who concern me. So are the ones who just can't get started at all.
Sure, platforms change, languages change, but basic data structures don't, and basic OOP doesn't either. You've probably worked with many of the same data structures on many of those platforms and have a working understanding of OOP that you can describe - maybe not perfectly, but well enough.
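To make the hashmap-vs-list point concrete, here is a hypothetical sketch (the record shape and names are made up for illustration): looking a record up in a list means scanning it every time, while a dict keyed on the field you query gives average O(1) lookups.

```python
# A list of records: every lookup is a linear scan, O(n) per query.
users_list = [{"id": i, "name": f"user{i}"} for i in range(10000)]

def find_in_list(uid):
    for u in users_list:
        if u["id"] == uid:
            return u
    return None

# A dict keyed by id: lookups are O(1) on average.
users_by_id = {u["id"]: u for u in users_list}

def find_in_dict(uid):
    return users_by_id.get(uid)
```

Both return the same answers; the difference only shows up as the data grows, which is exactly the kind of judgment the basic questions are probing for.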
Why should I ever write a merge sort when there are libraries for that? What if I have a better understanding of some other aspect of your system that you fail to identify and which your team is sorely lacking?
Not all programmers are good explainers. Someone who speaks English as a second language might have difficulty articulating the differences between two terms despite being able to code circles around you.
One of the best programmers I know explained Object Oriented Programming as "Imagine everything is a thing".
> Why would I be expected to write a merge sort when there are libraries for that? That's silly.
Definitely, work samples are better, and there are other puzzles on there that are probably more appropriate for a simple challenge.
In practice I'd almost never expect you to implement a sorting algorithm, and I'd probably bitch in code review if you did. However, on rare occasions you may need to for optimization - and if you can't implement mergesort when you are given the algorithm and offered assistance in understanding it, that probably reflects poorly on your ability to implement other things too, and maybe on your ability to implement in general.
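For reference, the mergesort the commenter has in mind is short enough to sketch from the textbook description (one plausible version; a real interview answer would vary):

```python
def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(items) <= 1:
        return list(items)
    # Split in half and recursively sort each half.
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge step: repeatedly take the smaller front element.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The merge step is where understanding shows: each comparison moves the smaller front element, so two sorted halves combine in linear time, giving O(n log n) overall.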
> Maybe I have a better understanding of some other aspect of your system than your team does but your questions will almost definitely fail to identify that.
Maybe you do, and of course you ask those kinds of questions too; work samples should be a main focus. But I immediately get worried if someone can't solve a basic problem or can't discuss programming using shared terminology; it makes discussing these things quite difficult.
> Likewise, one of the best programmers I know once defined Object Oriented Programming as "Imagine everything was a thing". And someone who speaks English as a second language might have difficulty articulating the differences between two terms despite being able to code circles around you.
I'm not expecting a specific perfect answer, just a basic answer in the right direction for a question like that. "Imagine everything was a thing" is in that direction, I'd inquire more of course, but that's a good start. As long as you have a useful understanding so that we can have a discussion that's good enough.
You can definitely code without Google and SO. And you should. Once you have a grasp of what you are doing, I think it's essential not to rely on those. Doing things only because SO says so is usually a very good way of losing track of what your code actually does. They are there as a reference and a help, not as a primary source.
Every company that has challenged me with a datastructure/algorithm coding challenge (not pseudocode whiteboarding) has given a full description of the algorithm and allowed googling of basically anything except "AlgoX implementation in Langlang"
I have always seen the interview process as an opportunity to understand how the other person thinks. These questions may do that to some extent, but they limit it in ways that make it almost mechanical.
I personally find "What is your favorite programming language?" or "What systems architecture do you find most compelling?" to be far more valuable. It gives some insight into their commitment to the trade and gives you an opportunity to make it a conversation, to dive into their reasoning process and depth of knowledge about a topic. I think it's far easier to weed out the fakers when they can't express why Erlang is interesting to them than by having them tell you how to implement a linked list, which is very easy to google and memorize (but won't necessarily prove any skill).
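To illustrate how little the linked-list question proves, this is roughly the memorizable answer (a minimal singly linked list sketch; the names are arbitrary):

```python
class Node:
    # One cell of a singly linked list: a value and a pointer onward.
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

def to_linked_list(values):
    # Build the list back-to-front so each node points at the rest.
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def to_list(head):
    # Walk the chain, collecting values into a plain Python list.
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out
```

It round-trips correctly, but reciting it tells you nothing about whether the candidate can reason about the trade-offs of using one.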
From what I hear about some startups, the challenges are for the general candidate pipeline the company wants to weed out without fearing legal repercussions. The inside-track, friends, and frat candidates often get to skip right through the process or have copies of the problems/answers from insiders. (Illusion of merit.)
Why would startups need to do this? AFAIK, at least for small US companies, they can just hire anyone they want. They are not obliged to post the jobs or accept general resumes -- if they want to hire an insider, they just hire him, and no one outside would even be aware that there was a job offering.
I have heard about larger companies / government agencies having these kinds of interviews due to internal procedures, but hopefully most startups have a more sane setup.
I think this view is a little bit cynical. Hiring your friends which usually happens in the early stages is often more about having worked with them before or having them come in based on a recommendation / knowing they're a culture fit ahead of time etc. It's a way for small startups to spend less time on hiring, and increase their chances of getting talent that they couldn't necessarily attract on the broader market in the early stages.
I appreciate the feedback and you raise good points. However, I'm curious what purpose many of these exams serve. Some guesses:
- Because it is actually a good predictor of job success (does anyone think this is the case for anything but a handful of job roles?)
- To add a hurdle that targets those who really want the job (downside: it also adds a hurdle for top-notch, gainfully employed people who don't have time to study for such exams)
- Another: they just need something to reduce an oversized pool of already-qualified candidates (fair point, but at the risk of losing the candidates from the concern above.)
- Legal risk mitigation -- add a quantitative measure to show a facade of merit and process.
TBH, I have never been on the hiring side at a small firm, so I don't know. But I have seen several things:
1. People recruiting me to Amazon and practically giving me the exam questions (I declined, mostly due to location). The same thing happened at two other firms (perhaps to get referral bonuses?).
2. An H1B process that is by-the-book correct yet entirely against the spirit of the program.
3. Tons of interviews, screenings, etc. at big firms just to justify an already-decided inside candidate. This I witnessed firsthand, and it makes me really upset because of all the candidate time they waste and the false hope they give to people who were never in the running to begin with.
Somewhat of a tangent, but that "Distributed design" document gives me hives, with its lack of lines from "write" or "async write" back to the readers or cache. The stale data would be a nightmare to manage when it suddenly starts to matter.
Also, where's the completed response caching layer? Nothing destroys performance (and hikes costs) like having to recompute every page response from raw data.
And that doesn't even touch my annoyance at shoehorning "federated architecture" to mean "sharding with a different algorithm".
The advent of all of these coding challenge exercises makes not being able to do one in an interview a real show-stopper. Glad there's all this material to learn from.
Not really. If anything it just makes coding challenges useless for distinguishing between people who can think, and people who can regurgitate.
It's one thing to "know" how to balance a binary tree because you've read up on and practiced the most common challenges - it's something else entirely to actually think through and understand a problem, then come up with a solution.
I'd much rather have someone who can describe the edge cases and the complexity of a problem but get the final solution slightly wrong (but hey, they know the edge cases, so they can write tests for it, right?), than someone who can regurgitate code they've written before without really understanding it.
>It's one thing to "know" how to balance a binary tree because you've read up on and practiced the most common challenges - it's something else entirely to actually think through and understand a problem, then come up with a solution.
There are too many things to know in the world in order to be able to know them all as deeply as we wish. For the things that we can get away with knowing the most common aspects of, time might be better spent elsewhere than to learn more about that thing.
Sure, I agree as well. There are so many resources and tools available for answering these coding challenges that, if for whatever reason such a challenge is presented, the minimum is to be able to answer it. Going beyond that would be knowing when to use such a data structure, or its inherent strengths and weaknesses, etc.
Which is why I think companies doing fewer whiteboard interviews and more real-world coding tests make more sense. I think one could understand the limitations of a linked list and still use the built-in data structures and be really competent.
It's really amazing that a whole industry has now sprouted up around hacking the interview process - you have interview boot camps, specific books about cracking the coding interview, mock interview services with actual SV engineers. As long as you have memorized how to reverse the last k elements of a doubly linked list, you are probably a safe hire.
To me it is reasonable to presume that a software developer candidate knows the basics of CS: algorithms, data structures and such. But I think anything that requires deeper knowledge should only be asked about if it is mandatory for the job.
Sure, if you want to maintain a certain "prestige" you should maybe use them to limit your applicant pool, but in the end, if your work is nothing but coding front-end JavaScript, it doesn't matter whether you know how to traverse B-trees or not. And if a need somehow arises, it is very easily googleable and implementable with a good enough CS education.
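As an example of how googleable this is, an in-order traversal of a plain binary search tree (the simple cousin of a B-tree) is only a few lines once you've looked up the idea (a sketch, not production code):

```python
class BSTNode:
    # A binary search tree node: everything in the left subtree is
    # smaller than the key, everything in the right subtree is larger.
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def in_order(node):
    # In-order traversal visits left subtree, node, then right subtree,
    # which yields the keys of a BST in sorted order.
    if node is None:
        return
    yield from in_order(node.left)
    yield node.key
    yield from in_order(node.right)
```

For example, with root = BSTNode(2, BSTNode(1), BSTNode(3)), list(in_order(root)) gives [1, 2, 3].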
I think the best interviews I have been in have involved some basics, sure, but one also had a few "real-life" coding exercises, and in another I had a code review of one of my own projects' code (which I was surprised by). Nothing tells you more about a person than having them explain what their passion is, but if you are only interested in people who can jump through hoops, that's the kind of people you are going to get. Maybe in that kind of company that's just how they do things.
I’d argue most people can intuitively tell, maybe not why, when a recruitment process feels “right” compared to one that is forced and does nothing but dehumanize you into a walking Wikipedia and a ticket-monkey. But maybe among programmers there are also some who seem to like that idea, so I’d say it is more of a skill and culture fit than anything else.
But in any case, the whole process should be based on some structure that actually measures something real. I personally don’t like being reduced to a few facts on a spreadsheet about missing knowledge that I could learn in a week or so. It hurts my ego and, most of all, makes me not want to work at said company in the future, so by the time I do know those things I won’t be reapplying.
With this type of post I always read comments complaining about why they should know these things for an interview, that they are not necessary. It is as if having more knowledge is going to harm them. I wish I knew more about everything. Sometimes the connection between a piece of knowledge and how to solve a problem is not clear when you learn it. Why is it important to know the syntactic structure of a sentence if I want to be a developer? Because you might end up working with compilers or natural language processing, and that knowledge is gold. But you simply don't know in advance, and you won't be able to discover how to solve a problem if you don't have knowledge about the solution in advance. There are also different ways of solving a problem, some better than others or with different trade-offs, which you only know about if you are aware of other possible solutions.
I don't think most people are complaining about gaining knowledge but the relevance of the test to job performance or the requirement to study whilst working and raising a family.
"... you wont be able to discover how to solve a problem if you don't have knowledge about the solution in advance."
is simply false. Whilst research is a skill all of its own, it's most definitely possible to go off and learn key techniques when you need to. Relying on serendipity for knowledge to be useful is not very efficient, after all.
At best the additional knowledge gained is a helpful side-effect.
It is not false. For example, when Albert Einstein was developing his theory, he could do it because he was in a research environment where they were doing research on infinitesimal calculus, which was handy for Einstein to apply to his theory. But I don't think it would have been clear to apply infinitesimal calculus to his theory if he hadn't known it in advance, or at least been aware of it and its possibilities.
When you face a new problem, you just don't know how to find a solution; you base your hypotheses on the knowledge you already have. You can do research, but the more knowledge you have, the less research you will do. And that time you save is money the company saves.
Anyway, you decide what to do with your life. If you are happier raising a family, that is good. If you are happy learning stuff to apply for better jobs, that is good too. But obviously you cannot have everything, because time is finite.
A good job and a good family life are eminently achievable goals in tandem. I personally spend my time out of work learning things outside my field just for the fun of it.
If you are happy with what you have, that is even better. But some people want to earn millions of dollars a year and be married to a top model. If they don't have both, then it is not good enough for them. And a lot of people just want something better regardless of what they have (forever unhappy people).
Thanks for this. Just a mild constructive criticism: not sure how useful the Anki deck is. One thing to keep in mind when constructing cards is keeping the information very short. As it stands, there's no way to really remember the flip side of the card. Aside from that, really solid info!
I've been using Python (among others) at work for close to 3 years now. I get these challenges and problems; however, I don't find the solutions here "pythonic". I guess it depends on the job criteria, but the code here [1], for example, is not a great demonstration of _Python_ language skills. It is a good demonstration that the person knows mathematics, but if someone is interviewing for a Python dev role, it's not what they should be looking for.
Might be slightly off topic, but I've been sort of a Python enthusiast for a long time and have never been able to actually learn how to code. I'm an aspiring software developer, but my degree is in an unrelated field of engineering.
My question is: by studying and doing a good amount of Python (currently working on it), going through all the interview challenges presented here, and pursuing a Master's degree in Management Information Systems (MIS), can I apply for, and be shortlisted for, jobs that might require a CS graduate, or even someone with an MS in CS?
Let's just say, for the sake of the comment, that I won't be able to obtain an MS in CS, for numerous reasons.
It's possible. I got a job as a software engineer at Google despite having a degree in Economics and only knowing Javascript. I spent a lot of time studying data structures & algorithms on my own but it's do-able.
Depends on whether it requires or "requires". There are all sorts of jobs out there. Read the curriculum of a CS degree. Compare the topics to the job requirements. Can you do the job?
The problem is how much devs really need to use this in real code. Why would an interviewer test someone on a function they could just use an existing library for? Why not ask how they would plan an application and design a flow? That's much more important.
I really appreciate the effort that went into this. This is perhaps the most thorough hand-holding through a "Cracking the Coding Interview" regimen I've seen thus far.
Why do we test algorithms and not design patterns?
It seems to me, after twenty years of building software, that the need for off-hand knowledge of advanced algorithms is rare, and the need for knowledge of design patterns is exceedingly common.
This is great! Having an integrated environment with the tests included and solutions in a separate notebook is quite valuable for training to pass technical interviews.
I see a lot of people posting about how they dislike the technical interviewing process. The sentiment seems to be that for a certain class of popular developer role it's entirely unlikely that they will ever need to remember how to implement a heap or red-black tree to be effective at their job... so why use problems like this in an interview?
I think these sorts of questions should be used by teams that want to set the minimum standards on the team. I'm reminded of a rule used at the Recurse Center: never feign surprise when someone asks a question (I paraphrase). The idea is that if someone asks you a question about something you think is fundamental, like Bash for example, don't immediately feign surprise and say, "You don't know Bash?" It's not helpful. Instead take it as an opportunity to teach them something awesome. However, this doesn't generally work in a professional setting where you're required to know Bash in order to function at your job.
If on my team we're responsible for the operation and maintenance of several millions of dollars worth of infrastructure that runs our customers' applications then there are certain minimum requirements for operating effectively within this team. I expect you would know Bash at a minimum. I should also expect you to know the network stack of the OS we deploy on, how TCP works, what a hypervisor is, etc. We may write most of our application code in a high-level language that abstracts these details away but that doesn't preclude you from understanding them. It's just a convenient tool. You have to know how to manage complexity and avoid premature pessimization in order to be effective and that means you will be interviewed using technical questions to screen for that minimum level of knowledge.
However, I think most companies do tend to use this tool poorly. I've interviewed with startups that build consumer web applications yet ask questions about radix sort and k-d trees. That's what I call overkill. This trap is easy to fall into if your motto is something glib and banal like, "We only hire the best." If you don't quantify what "best" is, you're just going to negatively filter out potential candidates and hire on bias. You have to scale the screening process to test only for that minimum competency, and choose how much risk you want to take on in teaching new hires.
If I'm hiring a more junior developer for our team, I expect that we're going to mentor that person and make them a better engineer by working with us. They can ask questions that may be outside of our minimum standard. However, if I'm hiring for a team-lead position, I'm much more likely not to accept a candidate who cannot fly through our screening process. Working with them is going to be difficult, and if it's 3 in the morning and they're causing more problems than they're fixing because they don't know how TCP retransmission works, then we're going to have a problem.
The conclusion of all this is: tweak the screening process to set the minimum standards required for effective communication on the team. Don't set the bar unnecessarily high: define what the bar should be and assess the risk of mentoring junior developers. You need a good mix across the spectrum for an effective team.
This feels way over-engineered. I just want to click and jump into a challenge, Project Euler style. Every single page is overflowing with "Table of Contents", screenshots, code snippets, etc.