Ask HN: Your thoughts about online developer recruitment tools like HackerRank?
52 points by ninadmhatre on Feb 15, 2016 | 50 comments
I recently took an online challenge on HackerRank for the first time, as the first round of an interview process. The experience was good, but as I have participated in only one such process, I don't have a strong opinion about it yet. I have also been part of live/shared coding interviews. These tools seem to have a hard cutoff: you must solve X% of the challenges, irrespective of how cleverly you solve them. So I'm not sure whether that angle is even considered when judging the answers.

Just wanted to hear your thoughts about such tools. Do you think they do a good or bad job? Is this a proper approach to finding talent?




In my opinion they test the wrong things for the majority of software developers. Most people who write code don't need the ability to solve especially hard algorithmic problems. Ranking solutions to hard problems is probably useful if you're Google but there aren't many Googles out there. We like to think that we're solving hard problems (and actually, on HN, we're more likely to be), but most companies aren't. A large amount of software is literally just a CRUD form and a database.

For the overwhelming majority of software businesses it's far more useful to know if someone is disciplined enough to use good variable names than whether they can write an algorithm to count the number of edges in an acyclic graph.


I've recently been thinking of using some kind of reading comprehension test during interviews. Aside from poor craftsmanship, the biggest disappointments among developers I have hired have been the ones who struggle to quickly read and absorb information. I think good reading comprehension skills are essential for code as well.


A clean-code testing tool would be far more advantageous. I understand the algorithmic challenges for the small percentage of jobs that need someone like that. However, most software development is about understanding needs, simplifying processes and expressing yourself cleanly in the code. I wonder how much money is being wasted, especially in Silicon Valley, on bad hiring - i.e. the astronomical cost "mr complex algorithm" will inflict on a codebase that would benefit from being simple.


You are right that the biggest part of the job is easy - but testing the easy parts is useless: all programmers would pass it. A test must be hard in some way if it is to filter people.

Disclaimer: I have a stake in https://codility.com/


I assure you this is not always the case. I have worked directly with multiple people who could contrive the most exquisite solutions to really difficult problems but whose code was gibberish. Flagging such characteristics during recruitment would be valuable.


Interesting - how would you flag it? In what way was their code gibberish?

In my experience people who understand complex concepts can also understand how to write clear code. Maybe those that you are writing about were not motivated to do that - or maybe it never occurred to them that this was needed?

Would you also mind explaining what you mean by "this is not always the case"? It does not seem to refer to my comment.


I guess that concise implementations (e.g. a one-liner) are often a complete bitch to debug when someone else comes back to them to fix a problem years later.

It is much more difficult to debug and fix existing code than to develop it from scratch. Hence, if you found the problem hard to code in the first place, then when you come back to debug it later you've screwed your future self.


I've also seen (severe) examples of the converse - e.g. writing an 800 line monster which could better have been expressed as a regular expression (and a simple one, at that).

This leads me to think that conciseness is not as simple a matter as we tend to think. Sure, overly concise code full of clever hacks is often a nightmare to debug, but that's mostly because it's the wrong abstraction and doesn't capture the problem well enough.
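A contrived sketch of the contrast (a hypothetical example in Python, not the actual code I saw): one simple pattern doing the work that a hand-rolled, hundreds-of-lines scanner might otherwise do.

    import re

    # One simple pattern in place of hundreds of lines of manual scanning.
    DATE = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

    def extract_date(line):
        m = DATE.search(line)
        return m.groups() if m else None

    print(extract_date("ERROR 2016-02-15 04:12:09 disk full"))  # ('2016', '02', '15')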


The problem with people writing overly concise code is with their motivation, not with their abilities - and so it is probably impossible to test (at least directly), because on a test they would have the motivation to do the right thing.


I disagree.

Knowing how much to abstract and writing clear code is indeed an ability.


No. If someone has to be "motivated" to write clear code, especially given that these tests are supposed to lead to jobs, then I would have to question their abilities. Writing clear code is an ability in itself.


The point was about code 'too concise' - this is different from 'not clear'.


If programmers are not actually solving similar problems as part of their jobs, sites like Codility provide little value over standard IQ tests. All they do is waste the candidate's time and the employer's money.


In my opinion programming is about solving programming problems. There are other important parts - like understanding user needs - but solving problems is at the core of it. In practice these programming problems are usually easier - but to test how proficient a programmer is at solving them, you need to use harder problems or maybe run some programming speed tests (many easy problems).


I think focusing on algorithms (if that's what you're suggesting) would be tunnel vision, and as a result the output of such a system will be much more weakly correlated with productivity than it should be.

Beyond code literacy, the most valuable skills for 99% of programming roles are empathy and diligence. Unless you're working on the kind of programming that's arguably pure mathematics, algorithms are just not important.

There are plenty of ways to test diligence; empathy is a little harder to test directly but there are reasonable proxies.

Disclaimer: I'm working on a product for applicant filtering (generic rather than coder-specific) https://www.beapplied.com/


You would be surprised how many "senior developers" I've worked with can't write a decent CRUD app...


Agreed. I'm generally much more interested in whether a developer can understand a new problem they haven't faced before, research possible solutions, compare the various trade-offs of solving them, and then implement a solution efficiently and effectively on a constrained time schedule.


Just to make it explicit - what kind of trade-offs do you have in mind?

I have some stake in https://Codility.com - I could perhaps ask them to add a new kind of tests :)


This is how I prefer to use it as a tool. Rather than setting up difficult questions that test some algorithm that can be found with a decent Google search, set up a small number of relatively straightforward challenges.

This then filters out anyone who just can't code, and you can review the passing submissions for craftsmanship.


>> We like to think that we're solving hard problems (and actually, on HN, we're more likely to be), but most companies aren't. A large amount of software is literally just a CRUD form and a database.

I think it's simply 'well, everyone is doing it, so...'. For some people it just feels safer to do what a lot of others do, while others can't be bothered, and some just don't know any better. I'd believe that's a very common reason for this approach :)


I agree, where I work we use HackerRank (for most roles that require any coding, even designers) and we try very hard to set a challenge that is representative of typical daily work.


Agreed. If you're not doing that kind of stuff, then anyone you get from these tools is probably going to be bored out of their mind, and leave soon. Either that, or you're going to pay them an assload of money, and they're going to spend lots of time on "clever" things.


I haven't used HackerRank, but I have experience with a similar product, Codility (https://codility.com/).

My experience:

1. Works great if engineers apply to you; much less so if you try to poach an engineer from a great company.

2. Great resumes or credentials sometimes don't correlate with great performance. In particular, there are hidden self-taught engineers who can build great products but never got a CS degree or worked at a well-known company. It can be a competitive advantage if you can give a standardized test to everyone who applies, when most companies wouldn't even bother to invite them for an interview.

3. Still, these tests can save a lot of time on both ends:

a) candidates usually prefer to spend an hour on an online test rather than take a day off for a full day of interviews

b) a company can offer the test to anyone; senior developers' time is usually too costly to offer a regular interview to everyone

4. It is not a replacement for the regular interview process. You still have to interview the candidate. Most of the time, the signal from these tests is clear if they are used properly (e.g. by giving a mix of tasks at different difficulties).


>"a) candidate usually prefer to spend one hour on an online test than to take a day off for full day of interviews"

One additional bit to add about this type of 'testing': it's write once, "be interviewed" multiple times, rather than the usual interviewing process that requires you to "write" some sort of test/interview every time you go to another potential employer.


Sorry, but usually what happens is that a HackerRank/Codility test is just the first filter. You still have a full day of technical interviews in the company office afterwards.

I have been interviewing recently, and most of the processes have 3-4 steps.


Point no. 2: the people who built great products and have good CVs don't have great performance? And why would you want to filter out great product builders just because they don't have a CS degree or haven't worked at a big company before?


I think HackerRank is a great platform for challenges, but I personally believe it only shows a small part of your skill set. I am a developer myself but have also hired a lot of engineers in my career. I believe that looking at existing projects you've done on Github gives a much better overview of your coding style, knowledge and ability to learn. It also exposes some of the biases I know we have in our own company (for instance, we know that if you have a functional programming background you'll fit in much better in our code base and team, even though we don't do functional programming).

All of this said, I think HackerRank is a great place to improve your skills in solving interview challenges and algorithmic problems.

Disclosure: I am a co-founder at source{d} where we analyse all git-based projects to understand developers through their code.


From experience it's very hard to know how a candidate will perform when hired, irrespective of the hiring process.

A better recruiting model, IMO, would be to recruit people on short-term contracts (e.g. 6 months) with a contractual agreement that the worker will transition to a permanent job at the end of the 6 months, provided their performance is in line with expectations.

I do use code exercises myself, but it's not some n-queens problem - which tests nothing practical (for the positions on my team, anyhow). Rather, it's a simple problem where I look for their ability to test-drive code and write clean code. The candidate then isn't pressured as they might be in a live situation. If the code is good, we walk through it in person.


"A better recruiting model IMO would be to recruit people on short-term contracts (e.g. 6 months) with the contractual agreement that the worker will transition to a permanent job at the end of the 6 months providing their performance is inline with expectation."

That already exists. And it leads to abuse of those contract workers, because they think that if they bust themselves just a little bit more, they'll get that carrot (the job). But they're burnt out at the end of the contract period, the employer inevitably says "No", and then brings in a new class of fresh-faced hopefuls.


Your solution looks good on paper, but if a candidate has two offers, one permanent and the other 6-months-to-permanent, I am pretty sure most would go for the permanent offer.


In my neighbourhood it is normal to hire people for a trial period first, up to three months (this timespan is regulated by law).


"Programming competitions correlate negatively with being good on the job" see https://news.ycombinator.com/item?id=9324209 for the more nuanced actual result.


(Disclaimer: I have a stake in https://www.kattis.com/)

Solving a programming challenge is not the same as winning the ICPC. And that is what a lot of people here are saying: it matters how you use these tools. If you ask developers to solve the very hardest challenges used during the ICPC, you are probably going to waste their time, and you might miss out on candidates who would have done a good job even if they couldn't solve that challenge. But a basic understanding of algorithms, which is what is needed to solve the average challenge, is something I think should be taken for granted whatever position you are applying for.


I was asked to do a few of these a while ago while searching for a job. I don't remember if I did HackerRank, though.

I don't really like them. I scored well in most of the ones I did, but as some folks mentioned, they either test the wrong things or have horrible UIs.

Also, some of them end up having multiple-choice questions that are really badly done (bad question wording, 2 correct answers, outdated questions, etc.).

I personally refuse to do any of them nowadays, even if I am really interested in the company. I just can't justify the time and (in the case of badly done ones) frustration just so companies can save a bit of time.


Their code editor is a joke.

The UI doesn't make it clear that switching tasks will lose your work, and to make it worse they silently break the clipboard so you can't "cheat", thus preventing you from storing your code anywhere. It makes experimentation really awful, as you have to keep commenting out the whole thing on every try.


Now you made me want to register just to check if my Vim with ItsAllText works.


I failed an interview a couple of days ago on a HackerRank challenge. The tool was awesome, but it could be better. I got really nervous, and in fact I only cleared the challenge 10 minutes after the stipulated time. One of the things that made me most nervous was that I was not able to debug my code as easily as I can in the Chrome console. The "click to run your code" workflow makes things a bit harder (my opinion).


When I completed HackerRank tests for my last role, I used an IDE and then copied and pasted the code into the browser.

This gave the benefit of a familiar working environment, and I could quickly TDD as needed. I highly recommend this approach when faced with these tests. That and working out in advance an appropriate way to accept input.
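For instance, here is a minimal sketch of the pattern (a made-up problem, not an actual HackerRank question): keep the logic in a pure function you can test locally in your IDE, with a thin stdin wrapper matching the expected input format.

    import sys

    def max_pair_sum(nums):
        # Pure logic: easy to unit test in the IDE before pasting in.
        nums = sorted(nums)
        return nums[-1] + nums[-2]

    if __name__ == "__main__":
        # Thin stdin wrapper for a typical input format: first line is n,
        # second line is n space-separated integers.
        n = int(sys.stdin.readline())
        nums = list(map(int, sys.stdin.readline().split()))
        print(max_pair_sum(nums))

Only the wrapper depends on the site's input format, so the function itself can be developed and tested entirely offline.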


I've been in the industry a while, and if a company/recruiter uses tools like HackerRank for recruitment, I simply ignore them and decide not to move forward. I think we as a software industry have been following bad hiring practices. If you can't tell whether a prospective colleague is smart enough by having a tech talk, or by giving them something that's not in a test format, there is something wrong with the process or the people doing the hiring. Also, it is hilarious how many times an engineer has to prove himself/herself; after being hired at these places I've also seen that the people doing the hiring are not that good/competent themselves, although this may vary elsewhere.


I didn't know about HackerRank. The concept seems quite similar to Stockfighter. Any idea how it compares?


HackerRank's core product is a tool that companies can use to send programming tests to potential candidates. It's typically used to screen candidates prior to an interview.

The Stockfighter-like concept is newer for them; it seems like they're also trying to work the pipeline from the other direction (i.e. finding good candidates for companies, instead of just testing candidates who have already applied for a position).


I think these tools underestimate the impact stress has on coders. I would dare to say more than 30% of ability can be impacted. Since coders rarely write code under extreme stress (unless they work on stock market apps), I don't see the point of introducing such a huge confounder.


(Disclaimer: I have a stake in https://www.kattis.com/)

Not all of "these tools" use stress as part of the evaluation. Kattis is one example that allows candidates an unlimited amount of time. It should be noted, though, that I think it's possible to increase the time limit on these other tools, up to some maximum.


Hacker Rank is to hiring what Tinder would be to dating if profiles were made up of genital pics and length/depth measurements.

It'll give you a clear measurement of a candidate's ability to do CS homework and not much else.


I've been on both sides of HackerRank. I helped develop/configure a HackerRank test for candidates for my team, and recently I've taken a HackerRank test while applying for a position at another company.

HackerRank is just a tool. Its effectiveness depends on how well the company interviewing candidates configures it. I think algorithmic questions are the most popular, but it's completely configurable; you can have it ask whatever you want.

It's also possible to manually review submissions. When we used it to evaluate candidates, we'd manually review the code for candidates that scored somewhere in the middle of the range. Depending on what their submission looked like, we'd decide whether or not to proceed with them. (HackerRank lets you see each version of the code attempted by the user, in addition to the final solution submitted.)

We actually found it particularly efficient at finding good candidates; there was a very high correlation between interview performance and HackerRank score. If properly configured, HackerRank makes it easier to identify good candidates, which is (IMO) a good thing for everyone. For companies, it means that they spend less time interviewing bad candidates, and for candidates themselves, it means that they might be able to get their foot in the door somewhere where they'd usually get blocked by the "resume scanner" filter (since the company isn't risking engineer time/productivity to send out a simple HackerRank evaluation).

That said, HackerRank isn't perfect. My biggest complaint is the lack of feedback for some failure modes, most notably segfaults and failing non-sample testcases. For segfaults, it simply returns "segmentation fault" and you're expected to be able to find the problem (a similar tool I've seen, coderpad.io, dumps a stack trace). In some algorithmic questions, non-sample testcases include data that is vastly more voluminous than the sample data (which is intended to catch non-optimal implementations of the algorithm), but this isn't obvious at all. It would be nice if the non-sample test cases had titles (e.g. "extremely large input" or "edge case") so you could theorize about why yours failed.

People who have experience using HackerRank have a definite edge over candidates who have never used it before. If you are planning on taking a HackerRank test for a position, I would recommend trying some open questions on their site first. I also recommend having a local text editor, compiler, and debugger ready in case you hit a segfault that isn't immediately obvious. If your solution fails with "Terminated due to timeout", or your code works on all the sample cases but fails/crashes on the hidden cases, then your algorithm is likely not efficient enough (in the timeout case, look for ways to speed it up; in the 'mystery crash' case, look for ways to reduce memory usage). Lastly, if you have extra time after completing a HackerRank test, I recommend making sure your code is as clean as possible and is well-documented (but not over-documented), in case they decide to manually review it.
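One cheap speed-up worth knowing about (a generic Python sketch, not tied to any real question): reading all of stdin at once is dramatically faster than calling input() in a loop, and can sometimes clear a "Terminated due to timeout" on its own, though usually the algorithm itself is the culprit.

    import sys

    def main():
        # Reading everything in one go is far faster than calling input()
        # repeatedly when a hidden testcase feeds in millions of numbers.
        data = sys.stdin.buffer.read().split()
        n = int(data[0])
        nums = map(int, data[1:1 + n])
        print(sum(nums))  # stand-in for the actual per-problem logic

    if __name__ == "__main__":
        main()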


(I work at HackerRank.) A couple of points:

* The stack trace -- we avoid showing stack traces to candidates as they have the potential to leak information (a well-crafted error message/stack can leak test case data).

We can and should show the trace (for languages that provide it) to reviewers. I've added an item to our internal tracker.

* Familiarity with the environment: most (if not all) of the challenges created by companies link to a sample test (hackerrank.com/tests/sample). We notice that many, but not all, candidates take this before the actual challenge. Perhaps we should highlight it more.


I could not attempt 1 challenge out of 4 because I didn't get the question and needed more information, maybe another sample of data? I struggled for 30+ minutes to understand the question but couldn't get any clues. So I failed to complete the last challenge because of not understanding the question (maybe it's my fault, but a hint or two would have helped) rather than because of my coding ability. Maybe you can (as you work there) add 1 or 2 optional hints and provide at least 2 samples of test data for tough questions. Anyway, it was a different and nice experience!


Sorry for the confusing wording; I intended for "segfaults" and "failing non-sample testcases" to be interpreted as separate things. Showing stack traces or any other info from the program's execution for non-sample cases would be a bad idea. However, if it segfaults for a sample testcase, showing a stack trace should be fine (and this will probably catch the vast majority of segfaults).


But what is HR's "rank"? It seems your submitted code either passes the test cases, or it doesn't, so the score is boolean? Or is the time spent writing it, or the number of submissions, factored in to calculate a more precise score?


The score is based entirely on the number of testcases you pass. If the company in question wants to look at other metrics, they need to do it manually.



