Anthropic: "Applicants should not use AI assistants" (simonwillison.net)
466 points by twapi 1 day ago | 398 comments





I'll be the contrarian and say that I don't find anything wrong with this, and if I were a candidate I'd simply take this as useful information for the application process. They do encourage use of AI, but they're asking nicely that I write my own text for the application - that's a reasonable request, and I'd have nothing against complying.

sshine's reply above comes from a very conflictual mindset: "Can I still use AI and not be caught? Is it cheating? Does it matter if it's cheating?"

I think that's a bit like lying on your first date. If you're looking to score, sure, it's somewhat unethical but it works. But if you're looking for a long term collaboration, _and_ you expect to be interviewed by several rounds of very smart people, then you're much better off just going along.


> I don't find anything wrong with this

It’s not about being wrong, it’s about being ironic. We have LLMs shoved down our throats as this new way to communicate—we are encouraged to ask them to make our writing “friendlier” or “more professional”—and then one of the companies creating such a tool asks the very people most interested in it to not use it for the exact purpose we’ve been told it’s good at. They are asking you pretty please to not do the bad thing they allow and encourage everyone to do. They have no issue if you do it to others, but they don’t like it when it’s done to them. It is funny and hypocritical and pulls back the curtain a bit on these companies.

It reminded me of the time Roy Wood Jr visited a pro-gun rally where they argued guns make people safe, while simultaneously asking people to not carry guns because they were worried about safety. The cognitive dissonance is worth pointing out.

https://youtube.com/watch?v=m2v9z2S5XzQ&t=190


The LLM companies have always been against this kind of thing.

Sam Altman (2023): "something very strange about people writing bullet points, having ChatGPT expand it to a polite email, sending it, and the sender using ChatGPT to condense it into the key bullet points"

3 years ago people were poking fun at how restrictive the terms were - you could get your API key blocked if you used it to pretend to be a human. Eventually people just used other AIs for things like that, so they got rid of these restrictions that they couldn't enforce anyway.


Interesting that this quote really contains "sender" where "recipient" was intended, but it had absolutely no impact on any reader. (I even asked Claude and ChatGPT if they noticed anything strange in the sentence, and both needed additional prompting to spot that mistake.)

https://x.com/sama/status/1631394688384270336

Thanks for this heads-up, by the way. I'd missed this particular tweet, but eventually arrived at exactly the same observation.


Wow I completely didn’t notice that until I read your comment. My brain must have automatically filled in the correct word. I had to go back and re-read it to confirm.

Well, English is not my first language, so I probably go through text more slowly and/or scan it differently, which gives me a higher chance of stumbling upon these oddities. (I can confirm that I sometimes notice an unexpected number of misspelt homophones or strangely related word substitutions.) Seeing two distinct LLM chats gloss over this particular nuance in almost identical ways was really interesting.

Grok, on the other hand, has absolutely no problem with the concept of the sender both expanding and compressing the message, or with the absence of a recipient. Even after a super-painstaking discussion, where Grok identified the strange absence of the "recipient", when I asked him to correct the sentence, he simply changed the word "sender" to the word "themselves":

> something very strange about people writing bullet points, having ChatGPT expand it to a polite email, sending it, and *themselves* using ChatGPT to condense it into the key bullet points

https://x.com/i/grok/share/mwFR2jQ9MS6uVemgiHokJKiBd (cringe/pain warning)


Typical English language learner, apologizing for having a better grasp of English than a native speaker.

Google running the "Dear Sydney" ad is in strong disagreement with that claim.

> It reminded me of the time Roy Wood Jr visited a pro-gun rally where they argued guns make people safe, while simultaneously asking people to not carry guns because they were worried about safety. The cognitive dissonance is worth pointing out.

Well, no. It's irony, but it's only cognitive dissonance in a comedy show which misses the nuance.

Most pro-gun organizations are heavily into gun safety. The message is that guns aren't unsafe if they're being used correctly. Most of the time, this means that most guns should be locked up in a safe, with ammo in a separate safe, except when being transported to a gun range, for hunting, or similar. When being used there, one should follow a specific set of procedures for keeping those activities safe as well.

It's a perfect analogy for the LLM here too. Anthropic encourages it for many uses, but not for the one textbox. Irony? Yes. Wrong? Probably not.


Huge miss on the gun analogy. The likes of the NRA are pushing for 50-state constitutional carry. Everyone has a gun on their person with no licensing requirements. Yet at the NRA conference they ban guns.

There’s probably actually some other hidden factor though, like the venue not allowing it.

Edit: FWIW those late night TV shows are nothing but rage bait low brow “comedy” that divides the country. But the above remains true.


> Everyone has a gun on their person with no licensing requirements. Yet at the NRA conference they ban guns.

That's not what the NRA is pushing for, any more than there are Democrats pushing for mandatory sex changes for all kids (yes, this is cited on similar right-wing comedy shows, and individuals on the right believe it). Pushing for a right doesn't mean 100% of the population will exercise that right.

And yes, most venues (as well as schools, government buildings, etc.) will not allow guns. If there's a security guard, police, or similar within spitting distance, there isn't a reasonable self-defence argument.

One of the interesting pieces of data is looking at 2nd amendment support versus distance to the nearest police station / police officer / likely law enforcement response times. It explains a lot about where support / opposition comes from.


The NRA is absolutely in favor of constitutional carry [0] and permitless carry [1].

[0] https://www.shootingillustrated.com/content/constitutional-c...

[1] https://www.nraila.org/articles/20210413/texas-permitless-ca...


Please reread what I wrote. You should correct your statement to:

"The NRA is absolutely in favor of A LEGAL RIGHT TO constitutional carry and permitless carry."

I have a legal right to spend all of my money on Pokemon, (in my jurisdiction) to pro-Nazi free speech, to paint the outside of my house bright pink, or to walk around wearing a mankini in the middle of the winter. Very few of the people who advocate for me to have those rights advocate for me to actually do any of those things.


Are you really arguing that it's okay for the NRA to support dumb laws because most people won't make use of them?

>And yes, most venues (as well as schools, government buildings, etc.) will not allow guns. If there's a security guard, police, or similar within spitting distance, there isn't a reasonable self-defence argument.

Can you give me one example of a valid "reasonable self-defence argument"? Legit question.


The extreme scenario:

I live in a home surrounded by miles of fields. There is no one within miles to hear me scream. Without a gun, anyone could come by my home, kill me, rob my home, and be gone before the police would even show up, if I even had a chance to call them. If I didn't call the police, they could literally move in and stay for months before anyone would notice.

The reason this does not happen is because everyone has a gun. Everyone knows I have a gun. If I see you coming on my property, I WILL shoot you. You don't know if the first shot will be a warning shot, birdshot, buckshot, or a 5.56×45mm NATO. You might get lucky and I might not spot you. Or you might be crippled for life. Without guns, crime is free. With guns, crime doesn't pay.

That's a scenario surprisingly common in rural America, parts of Appalachia, and other very low population density areas.

Now, I actually live in a dense city. There's a police station a few hundred yards away from virtually anywhere I might go. There are security cameras everywhere, thanks to Ring, Wyze, and friends. The city has a ShotSpotter system.

Crime rates are low, and more guns don't make me (personally) safer. Most of my neighbours want to ban them. However, I can understand there's a bias there.

As a footnote: If it were possible to hold a clear conversation, I think there are solutions which work for everyone. However, people talk past each other.


> The reason this does not happen is because everyone has a gun.

Probably not. The reason we're not permanently locked in a life or death battle against each other is that very few humans like committing violence. It's a pretty terrifying view of the world to think that all that's preventing someone from perpetrating a home invasion on you is the threat of violence.


> very few humans like committing violence

How many people commit violence, and how many people are victims of violence, are two very different things. You could live in a society where only 1% of people commit violence, and yet the remaining 99% are living in fear, because each of them was repeatedly a victim of violence.

But if you have 1% of people ready to initiate violence, and let's say 3% of people willing to use violence in self-defense, suddenly life becomes much safer for you, even if you are among the remaining 96%. Not because the bad guys would hesitate to hurt you, but because they are likely to get in trouble before they get to you.

People often confuse these two numbers. For example, they look at some statistics and think "20% of women report having been victims of domestic violence... oh, that means that 20% of men must be violent abusers", and they don't realize that the statistics also include some violent men who abused five or more partners each, so the actual number is probably much smaller than the 20%.


Without wading into the "good guy with a gun" debate, tl;dr: almost no humans want to effect the level of violence required to execute a home invasion, even if the risk of being shot is zero. A big deal is made about guns as deterrents, but the simpler answer (and the one that explains why it's also safe in rural areas of other OECD countries with gun control) is that humans just aren't that violent--when there's enough to go around anyway. That's all I'm saying here.

Your scenario _sounds_ convincing, but does it really work? Surely an attacker has a massive advantage in the element of surprise. If you see them coming (short of some sophisticated surveillance system), it's because they were impatient.

Or I have a dog.

> The reason this does not happen is because everyone has a gun. Everyone knows I have a gun. If I see you coming on my property, I WILL shoot you. You don't know if the first shot will be a warning shot, birdshot, buckshot, or a 5.56×45mm NATO. You might get lucky and I might not spot you. Or you might be crippled for life. Without guns, crime is free. With guns, crime doesn't pay.

Your perceived safety might be higher because you have a gun. This absolutely does not correlate with reality; extensive literature has looked at the perceived/real safety question. Very rich resource linking peer-reviewed research: https://www.americanprogress.org/article/debunking-the-guns-...

Anchoring it to your reality though, have you ever shot anyone invading your property with your gun, to act as a counterfactual? How many people in your area have shot invaders? What about accidents and misuse? I do not mean to minimize your experience and how safe you must feel, but it would be naive to close a serious matter like this with just your perception.


So the problem with a survey like this is that it does not break out among the scenarios I listed:

1) Rural, minimal police, minimal government, large plots, no collective security.

2) Dense, urban, heavy policing, significant government, tight housing, extensive collective security.

Indeed, it focuses on the latter. Virtually all of the addresses, photos, and stories talk about cities, or at least towns.

I don't want to over-post so I'll answer the other comments too:

1) Violence does not require more than "very few humans" to "like committing violence." The point of security isn't to protect against the typical individual but the violent outlier.

2) Most violent individuals aren't sophisticated. What's more, one instance of violence has little impact. Serial violence does. If an individual robs one house, that's not enough to live off of. If an individual robs houses regularly, in an area with guns, they will be shot. That's a pretty good deterrent.

For gun safety to move forward, both sides need to understand each other, and everyone needs to address the major issues of gun advocates, such as:

1) Day-to-day safety (on the scale / in the settings I described)

2) National safety (if Jan 6th had worked, and we had a coup; if China invaded; etc.)

3) Rule-of-law (we do have a 2nd amendment, and changing that would require an amendment)

Otherwise, it's simply a push of more guns versus less guns, with idiotic laws being shoved through opportunistically on both sides.


> yes, this is cited on similar right-wing comedy shows, and individuals on the right believe it

Can you give an example? Of course you can find 2 people in the US who believe it, and 2 comedy shows where it was said, which makes it technically true, but I don't think I've ever seen anything like this said.


I don't log all comedy shows I see, so I can't provide a citation off-hand, but I've heard it plenty of times. However, to see consequences, I might start by reading executive orders:

https://www.whitehouse.gov/presidential-actions/2025/01/prot...

And follow the trail back to how they got there.

https://www.breitbart.com/education/2022/07/07/aclu-national...

You can look around. You'll see many other articles like this one. As with most things, this is distilled into more inflammatory posting once it hits social media or comedy.


> I don't log all comedy shows I see

I don't see the point of this snark. If you don't have any examples of what you're saying, why reply at all?


I'm not entirely sure you picked the best example, because the Democrats aren't pushing for that to be a right at all. It's certainly true that Republicans bought into the hysteria; my home state passed a bill banning it despite it having never once occurred and despite such a thing already going against the standards of care.

But Constitutional Carry does allow for anyone who can legally acquire a gun to be armed if they choose. I honestly don't mind this since basically anyone can get a concealed carry permit already and these bills just remove the paperwork and fees. I would love to see annual car registration done away with in the same manner, pointless busywork.

So if you're doing a bit on a comedy show or news program that's "what does $bill maximally allow for", then you do get everyone armed in public without a permit (which again is fine; I don't know why people care, this could already happen right now), but you don't get "every child gets a sex change."


There are LEOs that were prosecuted by states and the federal government for not taking action while children were being shot by another child.

LEOs are expected to take fire to protect civilians. Protect & Serve is their credo.

I wouldn’t trust LEOs to protect me, so I sure as hell fire am not trusting a low paid rent-a-cop to perform a similar duty.

Nope. I believe that my mindset is prevalent and not an outlier.


I would trust a security guard more. They have consequences for misconduct or failure to do their job. (Assuming they aren't an on-duty LEO who is "overemployed.")

Not enough that I think they'd protect me in a situation that requires a gun. Just more than a cop.


A security guard's job is to act as a witness and deterrent, not to intervene to protect you.

Only a private bodyguard can be expected to fight for you.

Observe and Report.


it's interesting to me how easily you can fact check the statement:

> Everyone has a gun on their person with no licensing requirements. Yet at the NRA conference they ban guns.

yet, you claim that it's the late night TV that divides us, while making sure to double down on your misleading statement.

The NRA doesn't "ban guns at their conferences"; guns have been banned at small parts of a multi-day conference, e.g. where Trump was speaking, because that was a rule established by the Secret Service, and they complied for that small part of the conference.

When the majority of a conference allows guns, it's simply a lie to claim that guns were banned. An unintentional lie, I'm sure, but it seems likely to be the result of you believing some headline or tweet and accepting something wholesale as truth because it fit your narrative. I'm guilty of the same, it happens, but hopefully we can both get better about portraying easily fact checked things as the truth.


Maybe I’m wrong, but while we’re fact checking, can you provide a source?

I'm very skeptical that you're having a hard time sourcing this information. I have pages from my google search with easily 75% of the results confirming my claim.

Either way, here you go:

https://www.usatoday.com/story/news/factcheck/2022/05/27/fac...

> "Restrictions are in place exclusively at the NRA-ILA Leadership Forum at the direction of the United States Secret Service," spokesman Lars Dalseide wrote in an email to USA TODAY. He called the claim that the NRA is banning guns at its conference "incorrect."


These are the same people that insist we arm elementary school teachers and expect those teachers to someday pull the trigger on a child instead of having proper gun laws.

There is no irony.


If you think proper gun laws would keep guns away from evil people in the US, please explain to me why the war against illegal drugs in the US has been losing since the day it started.

Sure, some places in the world can successfully limit gun access. Those places aren't close to, let alone bordered by, the most active cartels in the world.

Just as a fun thought exercise, consider what it takes to grow the plant necessary to produce just a single drug, cocaine, for the country every year: at least 300,000 acres, or roughly the size of Los Angeles. That's after decades of optimizations to reduce the amount of land needed. It's also only one drug among a vast number that are regularly consumed in the US.

By comparison, you can 3D print guns at home. Successful builds have been made on some of the cheapest 3D printers you can find.


Regarding the drugs vs guns comparison, I bet you'd find that every country that has implemented reasonably effective gun control still has thriving illicit drug markets. Australia is just as zealous at persecuting the drug war as the US but continues to fail at that whereas gun crime is very low.

Australians, for example, don't want guns relative to Americans. It's not hard to ban things people don't want. Prior to the 1996 gun laws in Australia, household gun ownership was around 15%. In the US it's been as high as 45% in recent years. As far as the number of guns owned, we have more guns than people. So yeah, taking a country with a third of the ownership, and no cartels on its border, down another 75% with regulation isn't that shocking to me.

You would never achieve that in the US and that's incredibly obvious to me by looking at the gun crime stats in places that do already have gun control laws in the US.


Drugs are more addictive than guns typically. I am not sure your comparison is useful.

yet in the US just as many households have reported using drugs as own guns

When have gun laws ever stopped a shooting?

When have criminals EVER followed law, code, rules, or even a suggestion from their fellow citizens?

Believing laws deter criminals is almost criminally insane and beggars all logic.

After all of the accumulated evidence against your belief, you still believe laws deter criminality.

The death penalty doesn’t deter criminals. How could words possibly have an effect?


The US firearm mortality rate was 5x that of the nearest high-income countries in 2019 [0]. The US had 120 firearms per 100 people in 2018 with 80% of all homicides being gun-related [1].

Those statistics may not be wholly attributable to differences in gun laws but it seems a stretch to suggest they're unrelated.

[0] https://www.linkedin.com/pulse/us-midst-public-health-crisis...

[1] https://www.bbc.co.uk/news/world-us-canada-41488081


Because most of gun safety is not actually about criminals; it’s about regular people with legal firearms becoming involved in crimes of passion, tragic accidents, and suicides.

America is the worst industrialized country for gun deaths because the guns are present to enable those things to happen. Countless studies show that the key to reducing gun deaths is not more training, more “good guys”, or whatever else — it’s simply having fewer gun households period.


Somehow, in Switzerland every household has a gun, and yet they don't have the American levels of crime.

And indeed, a high proportion of gun deaths in Switzerland are suicides:

https://en.wikipedia.org/wiki/Firearms_regulation_in_Switzer...

This tracks with:

- High gun ownership

- Strong gun responsibility culture (guns are safely stored, so fewer accidents)

- Strong gun regulation (guns require permits that are not issued to those with criminal records)


> When have gun laws ever stopped a shooting?

Have you heard of Australia? https://www.sydney.edu.au/news-opinion/news/2018/03/13/gun-l...


Australia has no land borders. It's one of the easiest places to secure, and thus it makes sense to do so.

You're really going with a position like, "criminals will disobey laws therefore it is pointless to have laws," huh?

I don't expect that, so I won't be using public schools.

I have no illusion that safety or education is an actual concern in public schools in general.


They do, but safety and social control go hand in hand.

In any case, it's not as if your kid is safer at a private school. Kids are violent, no matter where they are; maybe you remember going through school yourself?


My guess is it's more due to insurance at the venue. I don't know who pays in those situations, but I would imagine they require "no guns" posted and announced. And if there is any form of gunshot injury they have very strong teeth to dodge the claim.

The truth is that you can carry concealed at the NRA convention. It's a myth that they don't allow guns there.

The Secret Service disallows normal people to have guns in the same room where their protectees are speaking. If you want to watch one of the speeches and stay armed, there are other conference rooms / ballrooms with big monitors set up. Many attendees take advantage of this offer.


> Well, no. It's irony, but it's only cognitive dissonance in a comedy show which misses the nuance.

Like the nuance between sending out your love and doing the Nazi salute? Or different?


You're wrapping all the AI companies up in a single box, but:

* Most of the AI you get shoved down your throat is by the providers of services you use, not the AI companies.

* Among the AI companies, Anthropic in particular has had a very balanced voice that doesn't push for using AI where it doesn't belong. Their marketing page can barely be called that [0]. Their Claude-specific page doesn't mention using it for writing at all [1].

You seem to be committing the common fallacy of treating a large and disparate group of people and organizations as a monolith and ascribing cognitive dissonance where what you're actually seeing is diversity of opinion.

[0] https://www.anthropic.com/

[1] https://www.anthropic.com/claude


I don't see the cognitive dissonance here. If a model was applying for a position with a cosmetics company, they might want to see what the blank canvas looks like.

Being able to gauge a candidate's natural communication skills is highly useful. If you're an ineffective communicator, there's a good chance your comprehension skills are also deficient.


The same could be said about a lot of things, like being able to write a functional solution to a leet code puzzle on a black board in front of an audience.

IMHO, an effective interview process should attempt to mimic the position for which a person is applying. Making a candidate jump through hoops is a bit disrespectful.


IMO the interview process should help the employer correctly identify qualified candidates. Respect for the candidates is important but some candidates should absolutely be prepared to jump through hoops.

Jumping through hoops isn't the only way to demonstrate professional skill. I am the lion tamer, not the lion.

> If you're an ineffective communicator, there's a good chance your comprehension skills are also deficient.

We are quickly moving into a world where most communications are at best assisted by AI and more often have little human input at all. There’s nothing inherently “wrong” about that (we all have opinions there), but “natural” (quotes to emphasize that they’re taught and not natural anyway) communication skills are going to be less and less common as time marches on, much like handwriting, typewriting, calligraphy, etc.


One of the oldest computing principles is "garbage in, garbage out". The person with better native communication skills and AI will still outshine the one that does not with AI because the best AI in the world isn't going to recover signal when there is only noise.

Agreed for native skills, yes.

It's sardonic rather than ironic - irony is sarcastic humor devised to highlight a contradiction or hypocrisy; while sardonic is disdainful, bitter and scornful.

It'd have been ironic if Anthropic had asked the applicant not to use AI for the sake of originality and authenticity; but if the applicant felt compelled to do so, then it had better rock and wow them into hiring the applicant sight unseen.

It's sardonic because Anthropic is implying use of AI on them is an indication of fraud, deceit, or incompetence; but it's a form of efficiency, productivity or cleverness when used by them on the job!


> It's sardonic rather than ironic

Doesn’t seem like that to me, after reading the dictionary definitions.

Sardonic: grimly mocking or cynical.

Ironic: happening in a way contrary to what is expected, and typically causing wry amusement because of this.

Pretty sure I meant the second one.


Yes, and they should state that they also don't use AI in the selection process.

They don't, because they do. However, maybe the Anthropic AI isn't performing well on AI-generated applications.

I think they will get better results by having applicants talk to an AI during the application process.


> We have LLMs shoved down our throats as this new way to communicate

I don't think that's true at all.


Other than as glorified template generators, real professional software developers are only using coding assistants because they're being shoved down their throats: https://youtu.be/H_DBqWI8hJw?t=2072

Making your comms friendlier or whatever is one of the myriad ways to use LLMs. Maybe you personally have "LLMs shoved down your throat" by your corporate overlords. No one in their right mind can say that LLMs were created for such a purpose; it just so happens you can use them in this way.

LLMs are sold by corporate overlords to corporate overlords. They all know this is what it will be used for.

The writing was on the wall that the main use will be spam and phishing.

You can say the creators did not intend on this purpose, but it was created with knowledge that this would be the main use case.


LLMs aren’t making your comms friendlier; they’re just making them more vapid. When I see the language that ChatGPT spits out when you tell it to be friendly, I immediately think ‘okay, what is this person trying to sell me?’

You are you, here. Think of the statistically average person and what they consider friendly.

Now imagine a world where most kids were raised with this bullshit and it's normal to them.

Most kids were raised while inundated with advertising but don't talk like a TV commercial.

Respectfully, I disagree.

Like and subscribe.


I'm with you. I'm very surprised by the number of arguments which boil down to, "Well I can cheat and get away with it, so therefore I should cheat".

I have read that people are getting more selfish[1], but it still shocks me how much people are willing to push individualism and selfishness under the guise of either, "Well it's not illegal" or "Well, it's not detectable".

I think I'm just very much out of tune with the zeitgeist, because I can't imagine not going along with what's a polite request not to use AI.

I guess that puts me at a serious disadvantage in the job market, but I am okay with that, I've always been okay with that. 20 years ago my cohort were doing what I thought were selfish things to get ahead, and I'm fine with not doing those things and ending up on a different lesser trajectory.

But that doesn't mean I won't also air my dissatisfaction with just how much people seem to justify selfishness, or don't even regard it as selfish to ignore this request.

[1] https://fortune.com/2024/03/12/age-of-selfishness-sick-singl...


> I think I'm just very much out of tune with the zeitgeist, because I can't imagine not going along with what's a polite request not to use AI.

No, what you are is ignoring the context.

This request comes from a company building, promoting, and selling the very thing they are asking you not to use.

Yes, asking you not to use AI is indeed a polite request. It is one you should respect. “The zeitgeist” has as many people in favour of AI as against it, and picking either camp doesn’t make anyone special. Either stance is bound to be detrimental in some companies and positive in others.

But none of that matters, what makes this relevant is the context of who’s asking.


I didn't miss that context, I understand who Anthropic are.

That may be true, but your first response still doesn't seem to account for that fact.

Why does the company that's asking change the analysis here? Shouldn't they know better than anyone the limitations of their product?

Are you implying that Anthropic specifically pushes for their models to be used inappropriately as long as they're not the victims of that inappropriate use? Because I haven't seen that at all with Anthropic, they've been consistently the most subdued and reserved AI company out there, barely marketing their products at all and when they do, doing so very carefully.

Your reactions in this thread are understandable as reactions against the oversaturation of AI, but it's not really fair to paint all of the companies with the same brush when Anthropic exists to be a foil to Altman's irresponsible push for saturation.


I'd say that if a candidate can demonstrably 5x their performance with LLMs then I'd be keen to hire them.

By banning LLM usage I think Anthropic is just indirectly admitting that their assessments can't distinguish lame-duck LLM reliance from genuine increases in productivity.

This is certainly their prerogative but it's still a pretty bad look - like banning calculators in a math exam.


>I'm with you. I'm very surprised by the number of arguments which boil down to, "Well I can cheat and get away with it, so therefore I should cheat".

Have you seen the job market? Companies will treat you like garbage through the interview process, make you jump through pointless hoops, and then even if you get the job you can be laid off at any moment because of arbitrary reasons the CEO made up to get their bonus.

Why should anyone be honest when it goes entirely unappreciated and unrewarded? I can completely understand why people would cheat. When companies stop treating workers like garbage then they deserve honesty.


> Why should anyone be honest when it goes entirely unappreciated and unrewarded?

This is a good example of the attitude that I'm describing.

Your question is close to an unthinkable culture shock to me.

In my values and ethics, honesty isn't transactional. It's not something you practice because you expect the same back. It's not something that you regulate and only provide to others that meet some moral bar that you set.

Honesty is just something you do because it's ethically right to do so.

(Nor, by the way, is it motivated out of some fear of omnipotent reprisal.)


>In my values and ethics, honesty isn't transactional. It's not something you practice because you expect the same back.

Nor in mine, I'd like to be honest 100% of the time. Unfortunately, we don't live in a perfect world, and practicing morality usually just opens you up to be exploited and stepped on. It doesn't mean you need to be a shitty person, but you also shouldn't be a doormat.

Bad people don't play by the rules. If the good people let them they will take over. The only solution is for everyone to break the rules.


It depends on context. Imagine you're playing poker or some other game where being deceptive gives you an advantage. Do you tell the other poker players your hand because Honesty is the ethically right thing to do? You wouldn't win many games. On the other end of the spectrum are your dealings with your own friends and family. You're expected to be honest with them. I'm not going to try to place the job hunt anywhere particular on this spectrum, but surely it's somewhere in between.

When playing a game, honesty doesn't require you to announce your cards. But it would be considered dishonest to set up hidden cameras to see your opponent's cards.

What the parent poster is saying boils down to "They have money, so they deserve to be robbed." Funny to hear in an industry where most members get paid multiples of the median wage.

They sort of have a just-world fallacy going on, a trap I often fall into. I wish it were true.

I think you're more accurate here, fuck them. The reality is I'm fortunate to be an office drone instead of treated as utterly disposable in a gig economy. And if someday AI gets good enough to replace me, I will be replaced.


If "They" here meant me, then far from it. I certainly don't subscribe to the Just World fallacy.

If you're only honest or ethical when you think you'll get some good back from it, then you aren't being honest at all, you're just doing what is convenient, even if you believe any reward is deferred, possibly all the way to a future existence entirely.


Honesty is not something our modern societies optimize for, although I do wish things were different

It's not just the society, it's this particular company who optimizes for it.

The ones paying are, in their vast majority, the most selfish of them all. For example, it would be reasonable to say that Jeff Bezos is one of the most selfish people on the planet, so in the end it doesn't boil down to "Well I can cheat and get away with it, so therefore I should cheat" but more like "Well I can cheat, get away with it, and the victim is just another cheater, so therefore I should cheat".

Two wrongs don’t make a right, and it seems weird to me you’d want to work for such a most-selfish cheater in the first place.

Bezos, Musk, Zuckerberg and many, many others do everything in their power to reduce costs, including paying less in taxes, which includes using tax havens and tax loopholes that they themselves make sure to keep open by "lobbying" politicians. So effectively, to work in general means to work mostly for cheaters, and there is no way to avoid it. Sure, you can stay unemployed and stay clean of the moral corruption that living in a capitalist system entails, but many don't consider that an option; and it's not like buying from them is any better morally speaking, for the exact same reasons.

> "Well it's not illegal"

What's good for the gander.... I promise you they will use AI to vet your application.


> I promise you they will use AI to vet your application.

So?


> What's good for the gander....

I agree with your sentiment. But coming from a generative AI company that says "career development" and "communication" are two of their most popular use cases... That's like a tobacco company telling employees they are not permitted to smoke tobacco

Well, they probably aren't permitted to smoke tobacco indoors.

I honestly fail to see even the irony. "Company that makes hammers doesn't want you to use hammers all the time". It's a tool.

But if I squint, I _can_ see a mean-spirited "haha, look at those hypocrites" coming from people who enjoy tearing others down for no particular reason.


But it's ok for Anthropic's marketing, sales and development teams to push the use case (AI for writing, communication and career development)?

Even when squinting I can't see a genuine argument for why Anthropic shouldn't be raked over the coals for their sheer hypocrisy.


Do you have an example of the kind of marketing or sales push for communication use cases that you're implying exists? OpenAI totally does have a huge marketing arm, but Anthropic's home page doesn't make it look like they have a very large marketing or sales department at all—it looks like it was designed by the researchers themselves.

"Company that makes hammers for nailing wood together doesn't want candidates to use hammers during their wood-nailing test."

That makes no sense. It would rather be something like this, which actually makes sense:

> Company that makes hammers for nailing wood together doesn't want candidates to use hammers during their hammer-making test.


A brewery telling their employees to not drink the product while at work?

It'd be more analogous if it were a brewery telling interviewees not to drink during the interview

If only there were many jobs that mandate drinking alcohol to enhance your capabilities...

Can I request that they not use AI when evaluating my application, and expect them to respect my wishes? Highly doubtful. Respect is a 2-way street. This is not a collaboration, but a hierarchical mandate in one direction, for protecting themselves from the harms of the very tools they peddle.

It's like an un-inked tattoo artist or a teetotaling sommelier.

The optics are just bad. Stand behind your product, or accept that you will be fighting ridicule and suspicion endlessly.


It is a very sensible position, and I think the quote is a bit out of context, but the important part here is who it is coming from -- the company that makes money on both cheating in the job application process (which harms employers) and replacing said jobs with AI, or at least creating another excuse for layoffs (which harms the employees).

In a sense, they poisoned the well and don't want to drink from it now. Looking at it from this perspective justifies (in the eyes of some people at least) said cheating. Something something catalytic converter from the company truck.


It's worse. Anthropic is the one claiming to be a water-enhancing company that you should drink from instead of the poisoned well. But they are begging you not to feed them their water.

> if I were a candidate I'd simply take this as useful information for the application process. They do encourage use of AI, but they're asking nicely to write my own texts for the application - that's a reasonable request, and I'd have nothing against complying.

Sorry, the thought process of thinking that using an LLM for a job application is acceptable, esp. for a field which requests candid input about one's motivation, is beyond me.


The application process feels arbitrary and antagonistic to many. If that was how one felt, there would be little reason not to game the system.

I believe in playing the long game; moves benefiting the short term are unwise choices for the longer term.

Of course, we're in a free world. People are free to do what they wish, and face the consequences, of course.


It's not that there's anything wrong with this in particular. It's just that the general market seems much more optimistic about AI impact, than the AI companies themselves.

They don't want a motivation letter to be written by an LLM (because it's specifically about the personal motivation of the human candidate) - as far as I can see this does not reflect either positively or negatively on their level of optimism about AI impact in general.

Companies, especially large ones, are not interested in fads unless directly invested in them. The higher and steeper the initial wave, the bigger the disappointment, or at least the unfulfilled expectations, that follow - not always, but surprisingly often.

This is just experience and seniority in general, nothing particular about LLMs. For most businesses, I would behave the same.


> If you're looking to score, sure, it's somewhat unethical but it works.

Observation/Implication/Opinion:

Think reciprocal forces and trash TV ethics in both closed and open systems. The consequences are continuously diminished AND unvarying returns. Professionally as well as personally, in all parties involved. Stimulating, inspiring, motivating factors as well as the ability to perceive and "sense" all degrade. But compensation and cheating continue to work, even though the quality of the game, the players and their output decreases.

Nothing and nobody is resilient "enough" to the mechanism force - 'counter'-force so you better pick the right strategy. Waiting/Processing for a couple of days, lessons and honest attempts yields exponentially better results than cheating.

Companies should beware of this if they expect results that are qualitatively AND "honestly" safe & sound. This has been ignored in the past decades, which is why we are "here". Too much work, too many jobs, and way too many enabling outs have been lost almost irreversibly, on the individual level as well as in nano-, micro-, and macro-economics.

Applicants using AI is fine but applicants not being able to make that output usefully THEIRS is a problem.


> sshine reply above is coming from a very conflictual mindset. "Can I still use AI and not be caught? Is it cheating? Does it matter if it's cheating?"

First off: Since your opinion is the popular one, "sshine's reply below". ;-)

Those are not questions I ask.

Using AI is never cheating, you're optimizing for the wrong thing. Cheating occurs in games.

You can use AI, but if your writing ability is so far from what's expected, AI will just make that obvious (sludge). If it's not far off, and you use it as one might use spellcheck or Grammarly, it's like wearing fake heels at an audition: To get noticed. Just don't come on stilts.


I also don't find anything wrong with their stance. Ironic, sure, but I think to judge someone they need to have filters, and in this case the filter is whether someone is able to communicate without AI assistance.

Yes. It's also a realistic view of what AI is actually good for.

I imagine they wouldn't have a problem with using AI to proofread your responses. That feels like fair game.

But... it can't tell you your own thoughts. It has no thoughts. And if it did, they certainly wouldn't be your thoughts.


I love it when the "contrarian view" is absolutely not the contrarian view :)


I did check the conversation thread before I commented. At the time, and without looking very carefully, this particular view seemed missing.

> please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills.

There are two backwards things with this:

1) You can't ask people to not use AI when careful, responsible use is undetectable.

It just isn't a realistic request. You'll have great replies without AI use and great replies with AI use, and you won't be able to tell whether a great reply used AI or not. You will just be able to filter sludge and dyslexia.

2) This is still the "AI is cheating" approach, and I had hoped Anthropic to be thought leaders on responsible AI use:

In life there is no cheating. You're just optimizing for the wrong thing. AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.

If AI is making your final product and you're none the wiser, it didn't really help you, it just made you addicted to it.

Teach a man to fish...


Can't disagree more. Talent is built and perfected over thousands of hours of practice; LLMs just make you lazy. One thing people with seniority in the field, as I guess you are, don't realize is that LLMs don't help develop "muscle memory" in young practitioners; they just make them miserable, often caged in an infinite feedback loop of bug fixing or trying to untangle a code mess. They may extract some value by using them for studying, but I doubt it, and it only goes so far. When I started, I remember being able to extract so much knowledge just by reading a book about algorithms, trying to reimplement things, breaking them, and so on. Today I can use an LLM because I'm wise enough to spot wrong answers, but I still feel myself becoming a bit lazy.

I strongly agree with this comment. Anecdotal evidence time!

I'm an experienced dev (20 years of C++ and plenty of other stuff), and I frequently work with younger students in a mentor role, e.g. I've done Google Summer of Code three times as a mentor, and am also in KDE's own mentorship program.

In 2023/24, when ChatGPT was looming large, I took on a student who was of course attempting to use AI to learn and who was enjoying many of the obvious benefits - availability, tailoring information to his inquiry, etc. So we cut a deal: We'd use the same ChatGPT account and I could keep an eye on his interactions with the system, so I could help him when the AI went off the rails and was steering him into the wrong direction.

He initially made fast progress on the project I was helping him with, and was able to put more working code in place than others in the same phase. But then he hit a plateau really hard soon after, because he was running into bugs and issues he couldn't get solutions from the AI for and he just wasn't able to connect the dots himself.

He'd almost get there, but would sometimes forget to remove random single lines doing the wrong thing, etc. His mental map of the code was poor, because he hadn't written it himself in that oldschool "every line a hard-fought battle" style that really makes you understand why and how something works and how it connects to problems you're solving.

As a result he'd get frustrated and had bouts of absenteeism next, because there wasn't any string of rewards and little victories there but just listless poking in the mud.

To his credit, he eventually realized leaning on ChatGPT was holding him back mentally and he tried to take things slower and go back to API docs and slowly building up his codebase by himself.


It's like when you play World of Warcraft for the first time and you have this character boost to max level and you use it. You didn't go through the leveling phase and you do not understand the mechanics of your character, the behaviour of the mobs, or even how to get to another continent.

You are directly loaded with all the shiny tools and, while it does make it interesting and fun at first, the magic wears off rather quickly.

On the other hand, when you had to fight and learn your way up to level 80, you have this deeper and well-earned understanding of the game that makes for a fantastic experience.


'"every line a hard-fought battle" style that really makes you understand why and how something works'

I totally agree with this and I really like that way of wording it.


This is fascinating. The idea of leveling off in the learning curve is one that I hadn't considered before, although with hindsight it seems obvious. Based on your recollection (and without revealing too many personal details), do you recall any specific areas that caused the struggle? For example, was it a lack of understanding of the program architecture? Was it an issue of not understanding data structures? (or whatever) Thanks for your comment, it opened up a new set of questions for me.

A big problem was that he couldn't attain a mental model of how the code was behaving at runtime, in particular the lifetimes of data and objects - what would get created or destroyed when, exist at what time, happen in what sequence, exist for the whole runtime of the program vs. what's a temporary resource, that kind of thing.

The overall "flow" of the code didn't exist in his head, because he was basically taking small chunks of code in and out of ChatGPT, iterating locally wherever he was and the project just sort of growing organically that way. This is likely also what make the ChatGPT outputs themselves less useful over time: He wasn't aware of enough context to prompt the model with it, so it didn't have much to work with. There wasn't a lot of emerging intelligence a la provide what the client needs not what they think they need.

These days tools like aider end up prompting the model with a repo map etc. in the background transparently, but in 2023/24 that infra didn't exist yet and the context window of the models at the time was also much smaller.

In other words, the evolving nature of these tools might lead to different results today. On the other hand, if it had back then chances are he'd become even more reliant on them. The open question is whether there's a threshold there where it just stops mattering - if the results are always good, does it matter the human doesn't understand them? Naturally I find that prospect a bit frightening and creepy, but I assume some slice of the work will start looking like that.


> "every line a hard-fought battle" style that really makes you understand why and how something works

Absolutely true. However:

The real value of AI will be to *be aware* when at that local optimum, and then - if unable to find a way forward - at least reliably notify the user that that is indeed the case.

Bottom line, the number of engineering "hard-fought battles" is finite, and they should be chosen very wisely.

The performance multiplier that LLM agents brought changed the world. At least as the consumer web did in the 90s, and there will be no turning back.

This is like a computer company around 1980 hiring engineers but forbidding access to computers for some numerical task.

Funny, it reminds me of the reason Konami MSX1 games look the way they do compared to most of the competition: having access to superior development tools - their HP hardware emulator workstations.

If you are unable to come up with a filter for your applicants that is able to detect your own product, maybe you should evolve. What about asking an AI how to solve this? ;)


I have a feeling that "almost getting there" will simply become the norm. I have seen a lot of buggy and almost but not exactly right applications, processes and even laws that people simply have to live with.

If the US can be the world's biggest economy while having an opioid epidemic and writing paper cheques, and if Germany can be Europe's manufacturing hub while using faxes, then surely we as a society can live in the suboptimal state of everything digital being broken 10% of the time instead of half a percent.


Using faxes is much more streamlined than the more modern Scan, Email, Print process.

Only if you're starting with paper?

Years back I worked somewhere where we had to PDF documents to e-fax them to a supplier. We eventually found out that on their end it was just being received digitally and auto-converted to PDF.

It was never made paper.. So we asked if we could just email the PDF instead of paying for this fax service they wanted.

They said no.


There was a comment here on HN, I think, that explained why enterprises spend so much money on garbage software. It turned out that the garbage software was a huge improvement on what they did before, so it was still a savings in time and money and easier than a total overhaul.

I wonder what horror of process and machinery the supplier used before the fax->PDF process.


I once worked on a janky, held-together-with-duct-tape-and-bubblegum distributed app written in Microsoft Access. Yes, Microsoft Access for everything, no central server, no Oracle, no Postgres. Data was shared between client and server by HTTP downloads of zipped-up Access .mdb files which got merged into the clients' main database.

The main architect of the app told me, "Before we came along, they were doing all this with Excel spreadsheets. This is a vast improvement!"


there shouldn't ever be a print or scan step in the pipeline.

Found the german!

This seems to be the way of things. Oral traditions were devastated by writing, but the benefit is another civilization can hold on to all your knowledge while you experience a long and chaotic dark age so you don't start from 0 when the Enlightenment happens.

What about people who don't have access to a mentor? If not AI then what is their option? Is doing tutorials on your own a good way to learn?

Write something on your own. When stuck, consult the docs, Google the error message, and ask on StackOverflow (in this order).

There's no royal road to learning.


So, so, so many people have learnt to code on their own without a mentor. It requires a strong desire to learn and perseverance but it’s absolutely possible.

That you can learn so much about programming from books and open source and trial and error has made it a refuge for people with extreme social anxiety, for whom "bothering" a mentor with their questions would be unthinkable.

Not sure! My own path was very mentor-dependent. Participating in open source communities worked for me to find my original mentors as well. The other participants are incentivized to mentor/coach because the main thing you're bringing is time and motivation--and if they can teach you what you need to know to come back with better output while requiring less handholding down the road, their project wins.

It's not for everyone because open source tends to require you to have the personality to self-select goals. Outside of more explicit mentor relationships, the projects aren't set up to provide you with a structured curriculum or distribute tasks. But if you can think of something you want to get done or attempt in a project, chances are you'll get a lot of helping hands and eager teachers along the way.


Mostly by reading a good book to get the fundamentals down, then taking on a project to apply the knowledge and supplementing the gaps with online resources. There are good books and nice open source projects out there. You can get far with these by just studying them with determination. Later you can go on to the theoretical and philosophical parts of the field.

How do you know what a good book is? I've seen recommendations in fields I'm knowledgeable about that were hot garbage. Those were recommendations by reputed people for reputed authors. I don't know how a beginner is supposed to start without trying a few and learning some bad habits.

If you're a beginner, almost any book by a reputable publisher is good. The controversial ideas start at the upper intermediary or advanced level. No beginner knows enough to argue about clean code or the gang of four book.

There is no 'learning' in the abstract. You learn something. Doing tutorials teaches you how to do the things you do in them.

It all comes down to what you wanna learn. If you want to acquire skills doing the things you can ask AI to do, probably a bad idea to use them. If you want to learn some pointers on a field you don't even know what key words are relevant to take to a library, LLMs can help a lot.

If you wanna learn complex context dependent professional skills, I don't think there's an alternative to an experienced mentor.


Failing for a bit, thinking hard and then somehow getting to the answer - for me it was usually tutorials, asking on stackoverflow/forums, finding a random example on some webpage.

The fastest way for me to learn something new is to find working code or code that I can kick for a bit until it compiles/runs. Often I'll comment out everything and make it print hello world, and then from there try to figure out which essential bits I need to bring back in, or simplify/mock, until it works again.

I learn a lot more by forming a hypothesis "to make it do this, I need that bit of code, which needs that other bit that looks like it's just preparing this/that object" - and the hypothesis gets tested every time I try to compile/run.

Nowadays I might paste the error into chatgpt and it'll say something that will lead me a step or two closer to figuring out what's going on.


Why is modifying working code you didn't write better than having an AI help write code with you? Is it that the modified code doesn't run until you fix it? It still bypasses the 'hard won effort' criterion though?

I forgot to say, the aim is usually to integrate it into a bigger project that I'm writing by myself. The working code is usually for interfacing to libraries I didn't write - I could spend a year reading every line of code for a given library and understanding everything it does, and then realise it doesn't do what I want. The working code is to see what it can do first, or to kick it closer to what I want - only when I know it can do it will I spend the time to fully understand what's going on. Otherwise a hundred lifetimes wouldn't be enough to go through the amount of freely available crapware out there.

> As a result he'd get frustrated and had bouts of absenteeism next, because there wasn't any string of rewards and little victories there but just listless poking in the mud.

So as a mentor, you totally talked directly with them about what excites them, tied it to their work, encouraged them to talk about their frustrations openly, helped them develop resilience by showing them that setbacks are part of the process, and helped give them a sense of purpose and see how their work contributes to a bigger picture - to directly address the side effects of being a human with emotions, which could have happened regardless of the tool they used - and didn't just let them flounder because of your personal feelings about a particular tool, right? Or do you only mentor winners? Have you never had a mentee hit a wall before LLMs were invented, and never had to help anyone through the kind of emotional lows that an immature intern might need a mentor's help to work through?


> listless poking in the mud

https://xkcd.com/1838/


Use LLMs. But do not let them be the sole source of your information for any particular field. I think it's one of the most important disciplines the younger generation - to be honest, all generations - will have to learn.

I have a rule for myself as a non-native English speaker: Any day I ask LLMs to fix my English, I must read 10 pages from traditionally published books (preferably pre-2023). Just to prevent LLMs from dominating my language comprehension.


I use LLMs as a translation tool, and make sure to generate JSON flashcards.

Sometimes it is more important to get a point across in another language than it is to learn that language. Since the process is automatable, you can have it build a backlog of everything you skipped learning, so you keep some control over the habit of not learning what you're saying.
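
Concretely, the loop I have in mind looks something like this (a rough sketch; call_llm and the JSON shape are just stand-ins for whatever model and schema you actually use):

  import json

  def call_llm(prompt: str) -> str:
      """Stand-in for whichever model actually gets called."""
      raise NotImplementedError

  def translate_and_log(text: str, src: str = "en", dst: str = "sw") -> dict:
      # Ask for the translation as a flashcard-shaped JSON record.
      prompt = (
          f"Translate the following from {src} to {dst}. "
          'Reply only with JSON: {"source": ..., "translation": ..., "notes": ...}\n'
          + text
      )
      card = json.loads(call_llm(prompt))
      # Append to the backlog of things I skipped learning, for later study.
      with open("flashcards.jsonl", "a", encoding="utf-8") as f:
          f.write(json.dumps(card, ensure_ascii=False) + "\n")
      return card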


You perfectly encapsulated my view on this. I'm utterly bewildered with people who take the opposing position that AI is essentially a complete replacement for the human mind and you'd be stupid not to fully embrace it as your thought process.

This is a straightforward position and it's the one I hold but I had to reply to thank you for stating it so succinctly.

I drove cars before the sat nav systems and when I visited somewhere, I'd learn how to drive to there. The second drive would be from memory. However, as soon as I started relying on sat navs, I became dependent on them. I can not drive to a lot of places that I visited more than once without a sat nav these days (and I'm getting older, that's a part of it too).

I wonder if the same thing will happen with coding and LLMs.


On a roadtrip ten years back we chose to navigate by map and compass, and avoid highways.

With sat nav I don't even try to read the exit signs; I just follow the blue line. It takes me 10-20 drives somewhere before I have the muscle memory, and I never made an active mental effort.

Going somewhere by public transportation or on foot, e.g. through a large homogeneous parking lot complex, I consciously make an effort to take mental pictures so I can backtrack or traverse it perfectly the second time; in spite of that being mentally challenging, it's still the easiest way I have.

I cannot assemble the hardware that I write code for. This is in spite of having access to both the soldering equipment, the parts and the colleagues who are willing to help me.

At some point all skills become abstract; efficiency is traded for flexibility when you keep doing the same thing for a very long time.

I can still drive a stick shift, but maybe not in 20 years.


Progress is built on top of an infinite number of dependencies.

In many ways people that don't use sat nav are at a disadvantage: real time traffic and redirection, high precision ETA, trip logging, etc.


I can even feel it in my own coding. I've been coding almost my entire life, all the way back to C64 BASIC, and ever since I started relying on Copilot for most of my regular work I can feel my non-AI-assisted coding skills getting rusty.

That's a scary thing


I hear this argument all the time, and I think “this is exactly how people who coded in assembly back in the day thought about those using higher level programming languages.”

It is a paradigm shift, yes. And you will know less about the implementation at times, yes. But will you care when you can deploy things twice, three times, five times as fast as the person not using AI? No. And also, when you want to learn more about a specific bit of the AI written code, you can simply delve deep into it by asking the AI questions.

The AI right now may not be perfect, so yes you still need to know how to code. But in 5 years from now? Chances are you will go in your favorite app builder, state what you want, tweak what you get and you will get the product that you want, with maybe one dev making sure every once in a while that you’re not messing things up - maybe. So will new devs need to know high level programming languages? Possibly, but maybe not.


1. We still teach assembly to students. Having a mental model of what the computer is doing is incredibly helpful. Every good programmer has such a model in my experience. Some of them learned it by studying it explicitly, some picked it up more implicitly. But the former tends to be a whole lot faster without the stop on the way where you are floundering as a mid level with a horribly incorrect model for years (which I’ve seen many many times).

2. Compilers are deterministic. You can recompile the source code and get the same assembly a million times.

You can also take a bit of assembly then look at the source code of the compiler and tell exactly where that assembly came from. And you can change the compiler to change that output.

3. Source code is written in a formal unambiguous language.

I’m sure LLMs will be great at spitting out green field apps, but unless they evolve to honest to goodness AGI, this won’t get far beyond existing low code solutions.

No one has solved or even proposed a solution for any of these issues beyond “the AI will advance sufficiently that humans won’t need to look at the code ever. They’ll never need to interact with it in any way other than through the AI”.

But to get to that point will require AGI and the AI won’t need input from humans at all, it won’t need a manager telling it what to build.


The point of coding is not to tell a machine what to do.

The point of coding is to remove ambiguity from the specs.

"Code" is unambiguous, deterministic and testable language -- something no human language is (or wants to be).

LLMs today make many implementation mistakes where they confuse one system with another, assume some SQL commands are available in a given SQL engine when they aren't, etc. It's possible that these mistakes will be reduced to almost zero in the future.

But there is a whole other class of mistakes that cannot be solved by code generation -- even less so if there's nobody left capable of reading the generated code. It's when the LLM misunderstands the question, and/or when the requirements aren't even clear in the head of the person writing the question.

I sometimes try to use LLMs like this: I state a problem, a proposed approach, and ask the LLM to shoot holes in the solution. For now, they all fail miserably at this. They recite "corner cases" that don't have much or anything to do with the problem.

Only coding the happy path is a recipe for unsolvable bugs and eventually, catastrophe.


You seem very opinionated and sure of what the future holds for us, but I must remind you that in your example, "from assembly to higher level programming languages", the demand for programmers didn't go down, it went up, and as companies were able to develop more, more development and more investment followed, more challenges showed up, new jobs were invented and so on... You get where I'm going. The thing I'm questioning is how lazy new technologies make you: even before LLMs, many programmers had no idea how a computer works and only programmed in higher level languages, and it was already a disaster, with many people claiming software was bad and the industry going down a road where software quality matters less and less. That situation gets turbo-boosted by LLMs because "doesn't matter, I can deploy 100x a day", and disrupting the user experience imo won't lead us far.

I assume you disagree with there being such a thing as "responsible AI use", because besides that, I completely agree with everything you write, including my own experience of "spot wrong answers, but feel becoming lazy".

So I suppose you think that becoming lazy is always irresponsible?

It seems to me, then, that either the Amish are right, or there is a gray zone.

Being a CS teacher, my use of "responsible AI use" probably comes from a place of need: If I can say there is responsible AI use, I can pull the brake maybe a little bit for learners. It seems like LLMs in all their versatility are a great disservice to students. I'm not convinced it's entirely bad, but it is overwhelmingly bad for weak learners.


I think the same kind of critical thinking that was required to reimplement and break algorithms must now be used to untangle AIs answers. In that way, it's a new skill, with its own muscle memory. Previously learnt skills like debugging segfaults slowly become less relevant.

AI is a tool, and tool use is not lazy.

I think it's a lot more complicated than that. I think it can be used as a tool for people who already have knowledge and skills, but I do worry how it will affect people growing up with it.

Personally I see it more like going to someone who (claims) to know what they're doing and asking them to do it for me. I might be able to watch them at work and maybe get a very general idea of what they're doing but will I actually learn something? I don't think so.

Now, we may point to the fact that previous generations railed at the degeneration of youth through things like pocket calculators or mobile phones, but I think there is a massive difference between those things and so-called AI. Where those things could only ever be tools (give a calculator to someone who doesn't know any formulae and it will be useless to them), so-called AI can just jump straight to giving you the answer.

I personally believe that there are necessary steps that must be passed through to really obtain knowledge and I don't think so-called AI takes you through those steps. I think it will result in a generation of people with markedly fewer and shallower skills than the generations that came before.


I think you are both right.

AI will let some people conquer skills otherwise out of their reach, with all the pros and cons of that. It is exactly like the example someone else brought up of not needing to know assembly anymore with higher level languages: true, but those who do know it and can internalize how the machines operate have an easier time when it comes to figuring out the real hard problems and bugs they might hit.

Which means that you only need to learn machine language and assembly superficially, and you have a good chance of being a very good programmer.

However, where I am unsure how the things will unfold is that humans are constantly coming up with different programming languages, frameworks, patterns, because none of the existing ones really fit their mental model or are too much to learn about. Which — to me at least — hints at what I've long claimed: programming is more art than science. With complex interactions between a gazillion of mildly incompatible systems, even more so.

As such, for someone with strong fundamentals, AI tools never provided much of a boon to me (yet). Incidentally, neither did StackOverflow ever help me: I never found a problem that I struggled with that wasn't easily solved with reading the upstream docs or upstream code, and when neither was available or good enough, SO was mostly crickets too.

These days, I rarely do "gruntwork" programming, and only get called in on really hard problems, so the question switches to: how will we train the next generation of software engineers who are going to be called in for those hard problems?

Because let's admit it, even today, not everybody can handle them.


It is if the way to learn is doing it without a tool. Imagine using a robot to lift weights if you want to grow your own muscle mass. "Robot is a tool"

"Growing your own muscle mass" is an artificial goal that exists because of tools. Our bodies evolved under the background assumption that daily back-breaking labor is necessary for survival, and rely on it to stay in good operating conditions. We've since all but eliminated most of that labor for most people - so now we're forced to engage in otherwise pointless activity called "exercise" that's physically hard on purpose, to synthesize physical exertion that no longer happens naturally.

So obviously, if your goal is strictly to exert your body, you have to... exert your body. However, if your goal is anything else, then physical effort is not strictly required, and for many people, for many reasons, it is often undesirable. Hence machines.


And guess what, people's overall health and fitness have declined. Obesity is at an all time high. If you're in the US, there is a 40% chance you are obese. Your body likely contains very little muscle mass, and you are extremely likely to die of the side effects of metabolic syndrome.

People are seeing the advent of machines replacing all physical labor and transportation, not gradually like in the 20th century, but within the span of a decade: going from the average physical exertion of 1900 to the average modern lack of it - take a car every day, do no manual labor, no movement.

They are saying that you need exercise to replace what you are losing, that you need to train your body to keep it healthy and can't just rely on machines/robots to do everything for you because your body needs that exertion - and your answer is to say "now that we have robots there is no need to exercise even for exercise's sake". A point that's pretty much wrong, as modern-day physical health shows.

https://en.m.wikipedia.org/wiki/Metabolic_syndrome


>And guess what, people's overall health and fitness have declined.

Have you seen what physical labor does to a man's body? Go to a developing country to see it. Their 60 year olds look like our 75 year olds.

Sure, we're not as healthy as we could be with proper exercise and diet. But in the long run, sitting on your butt all day is better for your body than hard physical labor.


You've completely twisted what the parent post was saying, and I can't help but laugh out loud at claims like:

> there is a 40% chance you are obese.

Obesity is not a random variable — "darn, so unlucky for me to have fallen in the 40% bucket of obese people on birth": you fully (except in rare cases) control the factors that lead to obesity.

A solution to obesity is not to exercise but a varied diet, and eating less of it to match your energy needs (or be under when you are trying to lose weight). While you can achieve that by increasing your energy needs (exercise) and maintain energy input, you don't strictly have to.

Your link is also filled with funny "science" like the following:

> Neck circumference of more than 40.25 cm (15.85 in) for men ... is considered high-risk for metabolic syndrome.

Darn, as a 195cm / 6'5" male and neck circumference of 41cm (had to measure since I suspected I am close), I am busted. Obviously it correlates, just like BMI does (which is actually "smarter" because it controls for height), but this is just silly.

Since you just argued a point someone was not making: I am not saying there are no benefits to physical activity, just that obesity and physical activity — while correlated, are not causally linked. And the problems when you are obese are not the same as those of being physically inactive.


Hate to disagree with you over GP, with whose comment I mostly disagree too, but:

> you fully (except in rare cases) control the factors that lead to obesity.

Not really, unless you're a homo economicus rationalus and are fully in control of yourself, independent of physical and social environment you're in. There are various hereditary factors that can help or hinder one in maintaining their weight in times of plenty, and some of the confounding problems are effectively psychological in nature, too.

> A solution to obesity is not to exercise but a varied diet, and eating less of it to match your energy needs

I've seen reported research bounce back and forth on this over the years. Most recent claim I recall is that neither actually does much directly, with exercise being more critical than diet because it helps compensate for the body oversupplying energy to e.g. the immune system.

I mean, obviously "calories in, calories out" is thermodynamically true, but then your body is a dynamic system that tries to maintain homeostasis, and will play all kinds of screwy games with you if you try to cut off its energy, or burn it off too quickly. Exercise more? You might start eating more. Eat less? You might start to move less, or think slower, or starve less essential (and less obvious) aspects of your body. Or induce some extreme psychological reactions, like putting your brain in a loop of obsessive thinking about food, until you eat enough, at which point the effect just switches off.

Yes, most people have a degree of control over it. But that degree is not equally distributed - some people play in "easy mode", some people play in "god mode", helped by strong homeostasis maintaining healthy body weight, some people play in "hard mode"... and then some people play in "nightmare mode" - when body tries to force you to stay below healthy weight.


> I've seen reported research bounce back and forth on this over the years. Most recent claim I recall is that neither actually does much directly, with exercise being more critical than diet because it helps compensate for the body oversupplying energy to e.g. the immune system.

Hah, I understood what I think is the same study as saying exactly that exercise does not help: people who regularly walked 60km a day did not get "sick", because in people who did not "exercise" that much, the excess energy was instead used on the immune system responding too aggressively when it didn't need to - basically, you'll use the same energy, just for different purposes. Perhaps I am mixing up the studies or my interpretation is wrong.

And there are certainly confounding factors to one "controlling" their food intake, but my point is that it's not really random with a "40% chance" of you eating so much to become obese.

Also note that restoring the equilibrium (healthy weight, whatever that's defined to be) is more prone to the factors you bring up than maintaining it once there - as in, rarely do people become obese and then keep becoming more and more obese; they reach a certain equilibrium, but then have a hard time going through a food/energy deficit due to all the heavy adaptations the body and mind put us through.

And yes, those in "nightmare mode" have their own struggles, and because of such focus on obesity, they are pretty much disregarded in any medical research.

My "adaptation" for keeping a pretty healthy weight is that I am lazy to prepare food for myself, and then it only comes down to not having too many snacks in the house — trickier with kids, esp if I am skipping a family meal (I'll prepare enough food for them, so again, need to try not to eat the left-overs :D). So I am fully cognizant that it's not the same for everyone, but it's still definitely not "40% chance" — it's a clear abuse of the statistical language.


Said no one with even remote knowledge of the health benefits of fitness.

Fitness does not equal physical exercise.

It could be a simple lifestyle that makes you "fit" (lots of walking, working a not-too-demanding physical job, a physical hobby, biking around...).

The parent post is saying that technological advance has removed the need for physical activity to survive, but all of the gym rats have come out of the woodwork to complain how we are all going to die if we don't hit the gym, pronto.


What on earth are you talking about?

- Physical back-breaking work has not been eliminated for most people.

- Physical exercise triggers biological reward mechanisms which make exercise enjoyable and, er, rewarding for many people (arguably for most people, as it is a mammalian trait), ergo it is not undesirable. The UK NHS calls physical exercise essential.


> Physical back-breaking work has not been eliminated for most people.

I said most of it for most people specifically to avoid the quibble about mechanization in poorest countries and their relative population sizes.

> Physical exercise triggers biological reward mechanism which make exercise enjoyable and, er, rewarding for many people

I envy them. I'm not one of them.

> ergo it is not undesirable

Again, I specifically said "and for many people, for many reasons, is often undesirable" as to not have to spell out the obvious: you may like the exercise benefits of a physically hard work, but your boss probably doesn't - reducing the need for physical exertion reduces workplace injuries, allows worker to do more for longer, and opens up the labor pool to physically weaker people. So even if people only ever felt pleasure from physical exertion, the market would've been pushing to eliminate it anyway.

> UK NHS calls physical exercise essential.

They wouldn't have to if people actually liked doing it.


Incidentally, there are things like these: https://pmc.ncbi.nlm.nih.gov/articles/PMC6104107/

Your favourite online store is full of devices that'd help there, and they are used in physical therapy too.


When you go for a job as a forklift operator, nobody asks you to demonstrate how you can manually carry a load of timber.

Equally, if you just point to your friend and say "that's Dave, he's gonna do it for me", they won't give you the job. They'll give it to Dave instead.

That much is true, but I've seen a forklift operator face a situation where pallet of products fell apart and heavy items ended up on the floor. Guess who was in charge of picking them up and manually shelving them?

You forgot a second sentence that completes the logic chain. Yes, "some tools are useful for some work", so what? That wasn't the original claim...

The claim was that it's lazy to use a tool as a substitute for learning how to do something yourself. But when the tool entirely obviates the need for doing the task yourself, you don't need to be able to do it yourself to do the job. It doesn't matter if a forklift driver isn't strong enough to manually carry a load, similarly once AI is good enough it won't matter if a developer doesn't know how to write all the code an AI wrote for them, what matters is that they can produce code that fulfills requirements, regardless of how that code is produced.

> once AI is good enough it won't matter if a developer doesn't know how to write all the code an AI wrote for them, what matters is that they can produce code that fulfills requirements, regardless of how that code is produced.

Once AI is that good, the developer won't have a job any more.


The whole question is if the AI will ever get that good?

All evidence so far points to no (just like with every tool — farmers are still usually strong men even if they've got tractors that are thousands of times stronger than any human), but that still leaves a bunch of non-great programmers out of a job.


Tool use is fine, when you have the education and experience to use the tools properly, and to troubleshoot and recover when things go wrong.

The use of AI is not just a labour saving device, it allows the user to bypass thinking and learning. It robs the user of an opportunity to grow. If you don't have the experience to know better it may be able to masquerade as a teacher and a problem solver, but beyond a trivial level relying on it is actively harmful to one's education. At some point the user will encounter a problem that has no existing answer in the AI's training dataset, and come to realise they have no real foundation to rely on.

Code generative AI, as it currently exists, is a poisoned chalice.


The point he's making is, we still have to learn to use tools, no? There still has to be some knowledge there or else you're just sat sifting through all the crap the AI spits out endlessly for the rest of your life. The OP wrote his comment like it's a complete replacement rather than an enhancement.

You could similarly consider driving a car as "a tool that helps me get to X quicker". Now tell me cars don't make you lazy.

Tools can be crutches — useful but ultimately inhibitory to developing advanced skill.

Tools help us to put layers of abstraction between us and our goals. When things become too abstracted we lose sight of what we're really doing or why. Tools allow us to feel smart and productive while acting stupidly, and against our best interests. So we get fascism and catastrophic climate change, stuff like that. Tools create dependencies. We can't imagine life without our tools.

"We shape our tools and our tools in turn shape us" said Marshall McLuhan.


Using the wrong tool for the job required isn't lazy but it may be dangerous, inefficient and ultimately more expensive.

For learning, it can very well be. It also really depends on the tool and the task. A calculator is a fine tool, but a symbolic solver might be a few steps too far if you don't already understand the process, and possibly the start and end points.

Problem with AI is that it is often black box tool. And not even deterministic one.


AI as applied today is pretty deterministic under the hood. Models do get retrained and tuned often in the most common applications like ChatGPT, and responses are usually sampled with some randomness, but with a fixed model and a temperature of zero you should expect a deterministic answer.

Copying and pasting from stack overflow is a tool.

It’s fine to do in some cases, but it certainly gets abused by lazy incurious people.

Tool use in general certainly can be lazy. A car is a tool, but most people would call an able bodied person driving their car to the end of the driveway to get the mail lazy.


Tool use can make you lazy if you're not careful.

AI companies don't think so, is clearly the implication.

Let me give you an example from yesterday. I was learning Tailwind and had a really long class attribute on a div which I didn't like. I wanted to split it and found a way to do it using my JavaScript framework (the new way to do it was suggested by DeepSeek). When I started writing the list of classes by hand in the new format, Copilot gave me an autocomplete suggestion after I wrote the first class. I pressed tab and it was done.

I showed this to my new colleague, who is a bit older than me and has attitudes similar to yours. He told me he can do the same with some multi-cursor shenanigans, and I'll be honest, I wasn't interested in his approach. It seems like he would've taken more time to solve the same problem even though his technique was superior to mine. He said sure, it takes longer, but then I'd need to verify the Copilot output by reading the whole class list, and that's a pain - but I just reloaded the page and it was fine. He still wasn't comfortable with me using Copilot.

So yes, it does make me lazier, but you could say the same about using Go instead of C or any higher-level abstraction. These tools will only get better and more correct. It's our job to figure out where it is appropriate to use them and where it isn't. Going to either extreme is where the issue is.


Remember though that laziness, as I learned it in computing, is kind of "doing something later": you might have pushed the change/fix faster than your senior fellow programmer, but you still need to review and test that change, right? Maybe the change you're talking about was really trivial and you just needed to refresh your browser to see it, but when it's not, being lazy about a change will only make you suffer more when reviewing the PR and testing that the non-trivial change works for thousands of customers with different devices.

The problem is he wasn't comfortable with my solution even though it was clearly faster and it could be tested instantly. It's a mental block for him and a lot of people in this industry.

I don't advocate blindly trusting LLMs. I don't either and of course test whatever it spits out.


Testing usually isn't enough if you don't understand the solution in the first place. Testing is a sanity check for a solution that you do understand. Testing can't prove correctness, it can only find (some) errors.

LLMs are fine for inspiration in developing a solution.


I wouldn’t say it’s laziness. The thing is that every line of code is a burden as it’s written once, but will be read and edited many times. You should write the bare amount that makes the project work, then make it readable and then easily editable (for maintenance). There are many books written about the last part as it’s the hardest.

When you take all three into consideration, an LLM won't really matter unless you don't know much about the language or the libraries. When people go on about Vim or Emacs, it's just that they make the whole thing go faster.


Talent is innate. Proficiency requires practice.

100%. Learning is effort. Exercising is effort. Getting better at anything is effort. You simply can't skip the practice; that's the reality. If you want to *learn* something from scratch, AI will only help with answers, but you still need to put the time in to understand it.

Learning comes from focus and repetition. Talent comes from knowing which skill to use. Using AI effectively is a talent. Some of us embrace learning new skills while others hold onto the past. AI is here to stay, sorry. You can either learn to adapt to it or you can slowly die.

The argument that AI is bad and anyone who uses it ends up in a tangled mess is only your perspective and your experience. I’m way more productive using AI to help me than I ever was before. Yes, I proofread the result. Yes, I can discern a good response from a bad one.

AI isn’t a replacement for knowing how to code, but it can be an extremely valuable teacher to those orgs that lack proper training.

Any company that has the position that AI is bad, and lacks proper training and incentives for those that want to learn new skills, isn’t a company I ever want to work for.


I think GP is basically saying the same as you.

> LLMs just make you lazy.

Yeah, and I'd like to emphasize that this is qualitatively different from older gripes such as "calculators make kids lazy in math."

This is because LLMs have an amazing ability to dream up responses stuffed with traditional signals of truthfulness, care, engagement, honesty etc... but that ability is not matched by their chances of dreaming up answers and ideas that are logically true.

This gap is inevitable from their current design, and it means users are given signals that it's safe for their brains to think-less-hard (skepticism, critical analysis) about what's being returned at the same moments when they need to use their minds the most.

That's new. A calculator doesn't flatter you or pretend to be a wise professor with a big vocabulary listening very closely to your problems.


You sound like the sort of person of old who said "Why would you use the internet? Look in an encyclopaedia. 'Google it' is just lazy."

This trope is unbecoming of anyone sensible.


That's not what my comment implies. I'm just saying that relying solely on LLMs makes you lazy, like relying just on Google/StackOverflow or whatever; it doesn't shift you from a resource that can be laid off to a resource that can't. You must know your art, and use the tools wisely.

It's 2025, not 2015. 'google it and add the word reddit' is a thing. For now.

Google 'reflections on trusting trust'. Your level of trust in software that purports to think for you out of a multi-gig stew of word associations is pretty intense, but I wouldn't call it pretty sensible.


What on earth does that drivel even say? Did you generate this from a heavily quantised GPT-2?

Spot on. I'm not A Programmer(TM), but I have dabbled in a lot of languages doing a lot of random things.

Sometimes I have qwen2.5-coder:14b whip up a script to do some little thing where I don't want to spend a week doing remedial go/python just to get back to learning how to write boilerplate. All that experience means I can edit it easily enough because recognition kicks in and drags the memory kicking and screaming back into the front.

I quickly discovered it was essentially defaulting to "absolute novice." No error handlers, no file/folder existence checking, etc. I had to learn to put all that into the prompt.

>> "Write a python script to scrape all linked files of a certain file extension on a web page under the same domain as the page. Follow best practices. Handle errors, make strings OS-independent, etc. Be persnickety. Be pythonic."

Here's the output: https://gist.github.com/kyefox/d42471893de670a2a4179482d3c8b...

I'm far from an expert and my memory might be foggy, but that looks like a solid script. I can see someone with less practice trying the first thing that comes out without all the extra prompting, hitting errors, doing battle with debuggers, and not having any clue.

For example: I wrote a thing that pulled a bunch of JSON blobs from an API. Fixing the "out of handles" error is how I learned about file system and network default limits on open files and connections, and buffering. Hitting stuff like that over and over was educational and instilled good habits.
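
For anyone who doesn't want to click through the gist, here's a minimal sketch of the general shape such a script takes (my own illustration from memory, not the gist's contents; it assumes requests and BeautifulSoup are installed):

  #!/usr/bin/env python3
  """Download all files with a given extension linked from a page (same domain only)."""
  import os
  import sys
  from urllib.parse import urljoin, urlparse

  import requests
  from bs4 import BeautifulSoup

  def scrape(page_url: str, extension: str, out_dir: str = "downloads") -> None:
      resp = requests.get(page_url, timeout=30)
      resp.raise_for_status()
      soup = BeautifulSoup(resp.text, "html.parser")
      page_host = urlparse(page_url).netloc
      os.makedirs(out_dir, exist_ok=True)
      for a in soup.find_all("a", href=True):
          url = urljoin(page_url, a["href"])
          parsed = urlparse(url)
          # Same domain and matching extension (e.g. ".pdf") only.
          if parsed.netloc != page_host or not parsed.path.endswith(extension):
              continue
          name = os.path.basename(parsed.path) or "index"
          try:
              with requests.get(url, stream=True, timeout=30) as r:
                  r.raise_for_status()
                  with open(os.path.join(out_dir, name), "wb") as f:
                      for chunk in r.iter_content(chunk_size=65536):
                          f.write(chunk)
              print(f"saved {name}")
          except requests.RequestException as exc:
              print(f"skipping {url}: {exc}", file=sys.stderr)

  if __name__ == "__main__":
      scrape(sys.argv[1], sys.argv[2])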


Not lazy per se, but you stop thinking and start relying on AI to think for you.

And you must use those brain muscles, otherwise your skills begin to degrade fast, like really fast.

As long as you ask the LLM What - or a high-level How - you should be good.

As soon as you ask for (more than trivial) code or solutions - you start losing your skill and value as a developer.


Feeling "Lazy" is just an emotion, which to me has nothing to do with how productive you are as a human. In fact the people not feeling lazy but hyped are probably more effective and productive. You're just doing this to yourself because you have assumptions on how a productive/effective human should function. You could call it "stuck in the past".

I have the idea that there are 2 kinds of people, those avidly against AI because it makes mistakes (it sure does) and makes one lazy and all other kinds of negative things, and those that experiment and find a place for it but aren't that vocal about it.

Sure, you can go too far. I've heard someone in Quality Control proclaim "ChatGPT just knows everything, it saves me so much time!" To which I asked if they had heard about hallucinations, and they hadn't; they'd just been copying whatever it said into their reports. Which is certainly problematic.


> AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.

At least in theory that’s not what homework is. Homework should be exercises to allow practicing whatever technique you’re trying to learn, because most people learn best by repeatedly doing a thing rather than reading a few chapters of a book. By applying an LLM to the problem you’re just practicing how to use an LLM, which may be useful in its own right, but will turn you into a one trick pony who’s left unable to do anything they can’t use an LLM for.


What if you use it to get unstuck from a problem? Then come back and learn more about what you got stuck on.

That seems like responsible use.


In the context of homework, how likely is someone still in school, who probably considers homework to be an annoying chore, going to do this?

I can't really see an optimistic long-term result from that, similar to giving kids an iPad at a young age to get them out of your hair: shockingly poor literacy, difficulty with problem solving or critical thinking, exacerbating the problems with poor attention span that 'content creators' who target kids capitalise on, etc.

I'm not really a fan of the concept of homework in general but I don't think that swapping brain power with an OpenAI subscription is the way to go there.


If you use it in that way then fine. I suspect both you and I knew that’s not what the GP meant though.

Kinda struck a nerve because I was using an llm to help get me unstuck just as I saw that comment

But how likely is that?

It was the same way I think a lot of us used textbooks back in the day. Can’t figure out how to solve a problem, so look around for a similar setup in the chapter.

If AI is just a search over all information, this makes that process faster. I guess the downside is there was arguably something to be learned searching through the chapter as well.


Homework problems are normally geared to the text book that is being used for the class. They might take you through the same steps, developing the knowledge in the same order.

Using another source is probably going to mess you up.


> something to be learned searching through the chapter as well

Learning to mentally sort through and find links between concepts is probably the primary benefit of homework


Depends. Do they care about the problem? If so, they'll quickly hit diminishing returns on naive LLM use, and be forced to continue with primary sources.

doesn't sound much different than googling and finding a snippet that gets you unstuck. This might be a shortcut to the same thing.

But they asked how likely it is. My guess is it's a pretty small fraction of problems where you need to get unstuck.

fair enough

Um, get with the times, luddite. You can use an LLM for everything, including curing cancer and fixing climate change.

(I still mentally cringe as I remember the posts about Disney and Marvel going out of business because of Stable Diffusion. That certainly didn't age well.)


AI did my gym workout, still no muscles.

It would be great if all technologies freed us and gave us more time to do useful or constructive stuff instead. But the truth is, and AI is a very good example of this, a lot of these technologies are just making people dumb.

I'm not saying they are essentially bad, or that they are not useful at all, far from that. But it's about the use they are given.

> You can use an LLM for everything, including curing cancer and fixing climate change.

Maybe, yes. But the danger is rather in all the things you no longer feel you have a need to do, like learning a language, or how to properly write, or read.

LLM for everything is like the fast-food of information. Cheap, unhealthy, and sometimes addicting.


> AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.

Well, no. Homework is an aid to learning and LLM output is a shortcut for doing the thinking and typing yourself.

Copy and pasting some ChatGPT slop into your GCSE CS assignment (as I caught my 14yo doing last night…) isn’t learning (he hadn’t even read it) - it’s just chucking some text that might be passable at the examiner to see if you can get away with it.

Likewise, recruitment is a numbers game for underqualified applicants. Using the same shortcuts to increase the number of jobs you apply for will ultimately "pay off", but you're only getting a short-term advantage. You still haven't really got the chops to do the job.


I've seen applicants use AI to answer questions during the 'behavioral' Q&A-style interviews. Those applicants are 'cheating', and it defeats the whole purpose as we want to understand the candidate and their experience, not what LLMs will regurgitate.

Thankfully it's usually pretty easy to spot this so it's basically an immediate rejection.


If the company is doing behavioral Q&A interviews, I hope they're getting as many bad applicants as possible.

Adding a load of pseudo-science to the already horrible process of looking for a job is definitely not what we need.

I'll never submit myself to pseudo-IQ tests and word association questions for a job that will 99.9% of the time ask you to build CRUD applications.

The lengths that companies go to avoid doing a proper job at hiring people (one of the most important jobs they need to do) with automatic screening and these types of interviews is astonishing.

Good on whoever uses AI for that kind of shit. They want bullshit so why not use the best bullshit generators of our time?

You want to talk to me and get a feeling about how I'd behave? That's totally normal and expected. But if you want to get a bunch of written text to then run sentiment analysis on it and get a score on how "good" it is? Screw that.


I think there's a misunderstanding here. I just want to talk to the applicant and see what they think about working with designers, their thoughts on learning golang as a javascript developer, or how they've handled a last minute "high priority" project.

I assumed you meant behavioral interviews where companies try to "match" you with personality types through word association and other questions. Pseudo-science that I've seen used in multiple supposed tech companies.

When I did hiring, the questions you're referring to were normal interview questions. I had never categorized them as behavioral, but I might be ignorant to the slang.

Answering those kind of questions with AI sounds... absurd? Heh, I guess I'm not sure how you gain anything from AI answering those. Maybe time if you're automating everything? Still, I'd ask those questions in a video call not through email as they're a great opportunity to learn about the person in front of you.


You could reasonably argue that they're not cheating, indeed they're being very behaviorally revealing and you do understand everything you need to understand about them.

Too bad for them, but works for you…

I'm imagining a hiring workflow, for a role that is not 'specifically use AI for a thing', in which there is no suggestion that you shouldn't use AI systems for any part of the interview. It's just that it's an auto-fail, and if someone doesn't bother to hide it it's 'thanks for your time, bye!'.

And if they work to hide it, you know they're dishonest, also an auto-fail.


Consider the case where there is a non-native English speaker and they use AI to misrepresent their standard of written English communication.

Assume their command of English is insufficient to get the job ultimately. They've just wasted their own time and the company's time in that situation.

I imagine Anthropic is not short of applicants...


>Hey Claude, translate this to Swahili from English. Ok, now translate my response from Swahili to English. Thanks.

We're close to the point where using a human -> stt -> llm -> tts -> human pipeline you can do real time high quality bi directional spoken translation on a desktop.
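
Roughly, the skeleton of such a pipeline (the stage functions here are placeholders for whatever local STT/LLM/TTS models you wire in, not real APIs):

  # Hypothetical skeleton; speech_to_text, translate and text_to_speech are
  # stand-ins for whatever local models are plugged in (e.g. a Whisper-class
  # STT model, a small instruction-tuned LLM, a neural TTS engine).
  def speech_to_text(audio: bytes) -> str: ...
  def translate(text: str, src: str, dst: str) -> str: ...
  def text_to_speech(text: str, lang: str) -> bytes: ...

  def relay(audio_in: bytes, src: str = "en", dst: str = "sw") -> bytes:
      heard = speech_to_text(audio_in)    # human -> stt
      said = translate(heard, src, dst)   # stt -> llm
      return text_to_speech(said, dst)    # llm -> tts -> back to a human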


Why not just send the Swahili and let them MTL on the other end? At least then they have the original if there’s any ambiguity.

I’ve read multiple LLM job applications, and every single time I’d rather have just read the prompt. It’d be a quarter of the length and contain no less information.


Homework is a proxy for your retention of information and a guide to what you should review. That somehow schools started assigning grades to it is as nonsensically barbaric as public bare ass caning was 80 years ago and driven by the same instinct.

I agree on the grades part. And I was just thinking that the university that I went to never gave us grades during the year (the only exception I can think of was when we did practice exam papers so we had an idea how we were doing).

I think homework is more than a guide to what you should review though. It's partly so that the teacher can find out what students have learned/understood so they can adapt their teaching appropriately. It's also because using class/contact time to do work that can be done independently isn't always the best use of that time (at least once students are willing and capable of doing that work independently).


> careful, responsible use is undetectable

I think that's wishful thinking. You're underestimating how much people can tell about other people with the smallest amount of information. Humans are highly attuned to social interactions, and synthetic responses are more obvious than you think.


I was a TA years ago, before there were LLMs one could use to cheat effectively. The professor and I still detected a lot of cheating. The problem was what to do once you've caught it? If you can't prove that it's cheating -- you can't cite the sources copied from -- is it worth the fight? The professor's solution was just to knock down their grades.

At that time just downgrading them was justifiable, because though they had copied in someone else's text, they often weren't competent to identify the text that was best to copy, and they had to write some of the text themselves to make it appear a coherent whole and they weren't competent to do that. If they had used LLMs we would have been stuck. We would be sure they had cheated but their essay would still be better than that of many/most of their honest peers who had tried to demonstrate relevant skill and knowledge.

I think there is no solution except to stop assigning essays. Writing long form text will be a boutique skill like flint knapping, harvesting wild tubers, and casting bronze swords. (Who knows, the way things are going these skills might be relevant again all too soon.)


> You can't ask people to not use AI when careful, responsible use is undetectable.

You can't make a rule, if people can cheat undetectably?


You can, but it would be pointless since it would just filter out some honest people.

But this is exactly what we already do. Most exams have a "no cheating" rule, even though it's perfectly possible to cheat. The point is to discourage people from doing so, not to make it impossible.

Sorry, but if you are completely unable to catch cheaters, then the point becomes to punish the people who follow the rules.

>In life there is no cheating. You're just optimizing for the wrong thing. AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.

No, the homework is a proxy for measuring whatever the homework provider is interested in evaluating (hopefully). Talent is a vague word, and what we consider talent worth nurturing might be considered worthless from some other perspective.

For example, most schools will happily give you many national myths to learn and then evaluate how much of them you can reproduce on demand. They are far less likely to ask you to offer some criticism of those myths, to research who created them, with which intentions, and with which actual effects on people at large, using the different perspectives and metrics available out there.


It's a warning sign, designed to marginally improve the signal they are interested in. Some n% of applicants will reconsider. That's all it needs to do to be worth it, because putting that one sentence there required very little effort.

>In life there is no cheating

Huh?

>You're just optimizing for the wrong thing. AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.

Isn't that the definition of cheating? Presenting a false level of talent you don't possess?


>AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.

There's a difference between knowing how to use a calculator and knowing how to do math. The same goes for AI. Being talented at giving AI prompts doesn't mean you are generally talented or have AI-unrelated talents desired by an employer.


> In life there is no cheating

This really rubs me the wrong way because it reflects the shallow, borderline sociopathic stance of today's entrepreneurship.

Obedience to rules and an honest attitude is more than just an annoyance on your way to getting rich; it's the foundation of cooperation -- of our civilization.


You can stop people from using AI by holding the interview in the office.

> In life there is no cheating

Oh, my sweet summer child


Eh, you can only learn how to do something when you actually do it (generally).

AI just lets you get deeper down the fake-it-until-you-make-it hole. At some point it will actually matter that you know how to do something, and good luck then.

Either for you, or for your customers, or both.


If I want to assess a candidates performance when they can't use AI then I think I'd sit in a room with them and talk to them.

If I ask people not to use AI on a task where using AI is advantageous and undetectable then I'm going to discriminate against honest people.


But they don't want to do that.

They want to use AI in their hiring process. They want to be able to offload their work and biases to the machine. They just don't want other people to do it.

There's a reason that the EU AI legislation made AI that is used to hire people one of the focal points for action.


I think this gets to the core of the issue: interviews should be conducted by people who deeply understand the role and should involve a discussion, not a quiz.

Is it advantageous? AI-generated responses to this kind of question tend to be dull.

It might even give the honest people an advantage by giving them a tip to answer on their own.


This application requirement really bothered me as someone who's autistic and dyslexic. I think visually, and while I have valid ideas and unique perspectives, I sometimes struggle to convert my visual thoughts into traditional spoken/written language. AI tools are invaluable to me - they help bridge the gap between my visual thinking and the written expression that's expected in professional settings.

LLMs are essentially translation tools. I use them to translate my picture-thinking into words, just like others might use spell-checkers or dictation software. They don't change my ideas or insights - they just help me express them in a neurotypical-friendly format.

The irony here is that Anthropic is developing AI systems supposedly to benefit humanity, yet their application process explicitly excludes people who use AI as an accessibility tool. It's like telling someone they can't use their usual assistive tools during an application process.

When they say they want to evaluate "non-AI-assisted communication skills," they're essentially saying they want to evaluate my ability to communicate without my accessibility tools. For me, AI-assisted communication is actually a more authentic representation of my thoughts. It's not about gaining an unfair advantage - it's about leveling the playing field so my ideas can be understood by others.

This seems particularly short-sighted for a company developing AI systems. Shouldn't they want diverse perspectives, including from neurodivergent individuals who might have unique insights into how AI can genuinely help people think and communicate differently?


This is an excellent comment and it more-or-less changes my opinion on this issue. I approached it with an "AI bad" mentality which, if truth be told, I'm still going to hold. But you make a very good argument for why AI should be allowed and carefully monitored.

I think it was the spell-checker analogy that really sold me. And this ties in with the whole point that "AI" isn't one thing, it's a huge spectrum. I really don't think there's anything wrong with an interviewee using an editor that highlights their syntax, for example.

Where do you draw the line, though? Maybe you just don't. You conduct the interview and, if practical coding is a part of it, you observe the candidate using AI (or not) and assess them accordingly. If they just behave like a dumb proxy, they don't get the job. Beyond that, judge how dependent they are on AI and how well they can use it as a tool. Not easy, but probably better than just outright banning AI.


Exactly - being transparent about AI usage in interviews makes much more sense. Using AI effectively is becoming a crucial skill, like knowing how to use any other development tool. Using it well can supercharge productivity, but using it poorly can be counterproductive or even dangerous.

It's interesting that most software departments now expect their staff to use AI tools day-to-day, yet many still ban it during interviews. Why not flip this around? Let candidates demonstrate how they actually work with AI. It would be far more valuable to assess someone's judgment and skill in using these tools rather than pretending they don't exist.

If a candidate wants to show how they leverage AI in their workflow, that should be seen as a positive - it demonstrates transparency and real-world problem-solving approaches. After all, you're hiring someone for how they'll actually work, not how they perform in an artificial AI-free environment that doesn't reflect reality.

The key isn't whether someone uses AI, but how effectively they use it as part of their broader skillset. That's what companies should really be evaluating.


I feel very similarly. I'm also an extremely visual thinker who has a job as a programmer, and being able to bounce ideas back and forth between a "gifted intern" and myself is invaluable (in the past I used to use actual interns!)

I regard it as similar to using a text-to-speech tool for a blind person - who cares how they get their work done? I care about the quality of their work and my ability to interact with them, regardless of the method they use to get there.

Another example I would give: imagine there's someone who only works as a pair programmer with their associate. Apart, they are completely useless. Together, they're approximately 150% as productive as any two programmers pairing together. Would you hire them? How much would you pay them as a pair? I submit the right answer is yes, and something north of one full salary split in two. But for bureaucracy I'd love to try it.


I’m going to echo oneeyedpigeon - you changed my opinion completely. Minutes ago, I did not see a thing wrong with what Anthropic was doing. Reading your words completely flipped that.

This is exceptional writing and I appreciate your insight. Thanks for proving me wrong and giving me the chance to get closer to being right.


This is quite a conundrum. These AI companies thrive on the idea that very soon people will not be replaced by AI, but by people who can effectively use AI to be 10x more productive. If AI turns a normal coder into a 10x dev, then why wouldn't you want to see that during an interview? Especially since cheating this whole interview system has become trivial in the past months. It's not the applicants that are the problem, it's the outdated way of doing interviews.

Because as someone who's interviewing, I know you can use AI - anyone can. It likely keeps me from judging the pitfalls, and the design and architecture decisions, that proper engineering roles require. Especially for senior-and-above applications, I want to assess how you think about problems, which gives the candidate a chance to show their experience, their technical understanding, and their communication skills.

We don’t want to work with AI; we are going to pay the person for the person’s time, and we want to employ someone who isn’t switching off half their cognition when a hard problem approaches.


No, not everyone can really use AI to deliver something that works.

And ultimately, this is what this is about, right? Delivering working products.


> No, not everyone can really use AI to deliver something that works

"That works" is doing a lot of heavy lifting here, and really depends more on the technical skills of the person. Because, shocker, AI doesn't magically make you good and isn't good itself.

Anyone can prompt an AI for answers, it takes skill and knowledge to use those answers in something that works. By prompting AI for simple questions you don't train your skill/knowledge to answer the question yourself. Put simply, using AI makes you worse at your job - precisely when you need to be better.


"Put simply, using AI makes you worse at your job - precisely when you need to be better."

I don't follow.

Usually jobs require delivering working things. The more efficiently the worker knows his tools (like AI), the more he will deliver -> the better he is at his job.

If he cannot deliver reliable working things, because he does not understand the LLM output, then he fails at delivering.


You cannot just reduce programming to "deliver working things", though. For some tasks, sure, "working" is all that matters. For many tasks, though, efficiency, maintainability, and other factors are important.

You also need to take into account how to judge if something is "working" or not — that's not necessarily a trivial task.


That is part of "deliver working things".

A car held together by duct tape is usually not considered a working car, or road safe.

Same with code.

"You also need to take into account how to judge if something is "working" or not — that's not necessarily a trivial task."

Indeed, and if the examiner cannot do that, he might be in the wrong position in the first place.

If I am presented with code, I can ask the person what it does. If the person does not have a clue - then this shows quickly.


Completely agree. I'm judging the outputs of a process, I really am only interested in the inputs to that process as a matter of curiosity.

If I can't tell the difference, or if the AI helps you write drastically better code, I see it as no more and no less than, for example, pair programming or using assistive devices.

I also happen to think that most people, right now, are not very good at using AI to get things done, but I also expect those skills to improve with time.


Sure, but the output of your daily programming work isn't just the code you write for the company. It's also your own self-improvement, how you work with others, etc. For the record, I'm not just saying "AI bad"; I've come around to some use of AI being acceptable in an interview, provided it's properly assessed.

> Sure, but the output of your daily programming work isn't just the code you write for the company. It's also your own self-improvement, how you work with others, etc

Agreed, but I as the "end user" care not at all whether you're running a local LLM that you fine tune, or storing it all in your eidetic memory, or writing it down on post it notes that are all over your workspace[1]. Anything that works, works. I'm results oriented, and I do care very much about the results, but the methods (within obvious ethical and legal constraints) are up to you.

[1] I've seen all three in action. The post-it notes guy was amazing though. Apparently he had a head injury at one point and had almost no short term memory, so he coated every surface in post-its to remind himself. You'd never know unless you saw them though.


I think we're agreeing on the aim—good results—but disagreeing on what those results consist of. If I'm acting as a 'company', one that wants a beneficial relationship with a productive programmer for the long-term, I would rather have [ program that works 90%, programmer who is 10% better at their job having written it ] as my outputs than a perfect program and a less-good programmer.

I take epistemological issue with that, basically, because I don't know how you measure those things. I believe fundamentally that the only way to measure things like that is to look at the outputs, and whether it's the system improving or the person operating that system improving I can't tell.

What is the difference between a "less good programmer" and a "more good programmer" if you can't tell via their work output? Are we doing telepathy or soul gazing here? If they produce good work they could be a team of raccoons in a trench coat as far as I'm aware, unless they start stealing snacks from the corner store.


There is also a skill in prompting the AI for the right things in the right way in the right situations. Just like everyone can use google and read documentation, but some people are a lot better at it than others.

You absolutely can be a great developer who can't use AI effectively, or a mediocre developer who is very good with AI.


> not everyone can really use AI to deliver something that works.

That's not the assumption. The assumption is that if you prove you have a firm grip on delivering things that work without using AI, then you can also do it with AI.

And that it's easier to test you when you're working by yourself.


I see this line of "I need to assess your thinking, not the AI's" thinking so often from people who claim they are interviewing, but they never recognize the elephant in the room for some reason.

If people can AI their way into the position you are advertising, then at least one of the following two things have to be true:

1) the job you are advertising can be _literally_ solved by AI

2) you are not tailoring your interview process properly to the actual job that the candidate will need to do, hence the handwave-y "oh well harder problems will come up later that the AI will not be able to do". Focus the interview on the actual job that the AI can't do, and your worries will disappear.

My impression is that the people who are crying about AI use in interviews are the same people who refuse to make an effort themselves. This is just the variation of the meme where you are asked to flip a red black tree on a whiteboard, but then you get the job, and your task is to center a button with CSS. Make an effort and focus your interview on the actual job, and if you are still worried people will AI their way into it, then what position are you even advertising? Either use the AI to solve the problem then, or admit that the AI can't solve this and stop worrying about people using it.


Right now there’s a lot of engineering that falls outside SWE — think datacenter architecture or integrations, or security operations, or designing automotive assemblies. There are also dozens of components of a job that require context windows measured in the _years_ of experience — knowing what will work at scale and what won’t, client interfacing, communicating decisions down inside organisations through to customer support networks about how they are required to operate.

When we’re hiring for my role, Security Operations, I can’t have someone googling or asking AI what to do during a cyber security incident, but they can certainly use AI as much as they want when writing automations.

I reject candidates at all stages for all sorts of reasons, but more and more candidates believe the job can be done with AI. If we wanted AI, we would probably go wholesale and not bring in the person asking for the job just to do the typing for us.

We’re not crying due to AI, we’re crying over the dozens of lost hours of interviews we’re having to conduct where it’s business critical that people know their stuff — engineering positions with consequences (banks, infrastructure, automotive). There isn’t space for “well I didn’t write the code”.


>We don’t want to work with AI, we are going to pay the person for the persons time

If your interview problems are representative of the work that you actually do, and an AI can do it as well as a qualified candidate, then that means that eventually you'll be out-competed by a competitor that does want to work with AI, because it's much cheaper to hire an AI. If an AI could do great at your interview problems but still suck at the job, that means your interview questions aren't very good/representative.


Interview problems are never representative of the work that software developers do.

Sounds like the interview process needs to be improved then?

It’s more like “can you design something with the JWT flow” and the candidates scramble to learn what JWT is in the interview to impress us, instead of asking us to remind them of the specifics of JWT. They then get it wrong, and waste the interviewer’s time.

Then they shouldn't use libraries, open source code or even existing compilers. They shouldn't search online (man pages are OK). They should use git plumbing commands and sh (not bash or zsh). They should not have potable water in their house but should distill river water.

There is a balance to be struck. You obviously don't expect a SWE to begin by identifying rare earth metal mining spots on his first day.

Where the line is drawn is context dependent; drawing the same single line for all possible situations is not possible, and it's stupid to do so.


Very very true! Give them a take home assignment first and if they have a good result on that, give them an easier task, without AI, in person. Then you will quickly figure out who actually understands their work

If the interview consists of the interviewer asking "Write (xyz)", the interviewee opening Copilot and asking "Write (xyz)", and accepting the code, what was the point of the interview? Is the interviewee a genius productive 10x programmer because by using AI he just spent 1/10 the time to write the code?

Sure, maybe you can say that the tasks should be complex enough that AI can't do it, but AI systems are constantly changing, collecting user prompts and training to improve on them. And sometimes the candidates aren't deep enough in the hiring process yet to justify spending significant time giving a complex task. It's just easier and more effective to just say no AI please


It's not a conundrum, they're selling snake oil. (Come on people, we've been through this many times already.)

If an AI can do your test better than a human in 2025 it reflects not much better on your test than if a pocket calculator could do your test better than a human in 1970.

That did happen and the result from the test creators was the same back then: "we're not the problem, the machines are the problem. ban them!"

In the long run it turned out that if you could cheat with a calculator though, it was just a bad test....

I think there is an unwillingness to admit that there is a skill issue here with the test creators, and that if they got better at their job they wouldn't need to ban candidates from using AI.

It's surprising to hear this from anthropic though.


I do lots of technical interviews in Big Tech, and I would be open to candidates using AI tools in the open. I don't know why most companies ban it. IMO we should embrace them, or at least try to and see how it goes (maybe as a pilot program?).

I believe it won't change the outcomes that much. For example, on coding, an AI can't teach someone to program or reason on the spot, and the purpose of the interview never was to just answer the coding puzzle anyway.

To me it's always been about how someone reasons, how someone communicates, people understanding the foundations (data structure theory, how things scale, etc). If I give you a puzzle and you paste the most optimized answer with no reasoning or comment you're not going to pass the interview, no matter if it's done with AI, from memory or with stack overflow.

So what are we afraid of? That people are going to copy paste from AI outputs and we won't notice the difference with someone that really knows their stuff inside out? I don't think that's realistic.


> So what are we afraid of? That people are going to copy paste from AI outputs and we won't notice the difference with someone that really knows their stuff inside out? I don't think that's realistic.

Candidates could also have an AI listening to the questions and giving them answers. There are other ways that they could be in the process without copy/pasting blindly.

> To me it's always been about how someone reasons, how someone communicates, people understanding the foundations (data structure theory, how things scale, etc).

Exactly, that's why I feel like saying "AI is not allowed" makes it all more clear. As interviewers we want to see these abilities you have, and if candidates use an AI it's harder to know what's them and what's the AI. It's not that we don't think AI is a useful tool, it's that it reduces the amount of signal we get in an interview; and in any case there's the assumption that the better someone performs, the better they could use AI.


You could also learn a lot from what someone is asking an AI assistant.

Someone asking: "solve this problem" vs "what is the difference between array and dict" vs "what is the time complexity of a hashmap add operation", etc.

They give you different nuances on what the candidate knows and how they are approaching the understanding of the problem and its solution.


It's a new spin on the old leetcode problem - if you are good at leetcode you are not necessarily a good programmer for a company.

Kudos to Anthropic. The industry has way too many workers rationalizing cheating with AI right now.

Also, I think that the people who are saying it doesn't matter if they use AI to write their job application might not realize that:

1. Sometimes, application questions actually do have a point.

2. Some people can read a lot into what you say, and how you say it.


> While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.

Full quote here; seems like most of the comments here are leaving out the first part.


The irony here is obvious, but what's interesting is that Anthropic is basically asking you not to give them a realistic preview of how you will work.

This feels similar to asking devs to only use vim during a coding challenge and please refrain from using VS Code or another full featured IDE.

If you know, and even encourage, your employees to use LLMs at work you should want to see how well candidates present themselves in that same situation.


It’s hardly that. This is one component of an interview process - not all of it!

I'm out of context here as I'm not applying to Anthropic, not surprised at all if I'm missing details of the full process!

If this is just for a written part of the process or something, maybe I get it? But even then, if you expect employees to use LLMs I'd really want to see how well they interview with the LLMs available.


I still don't know how to quit Vim without googling for instructions :P

As an anecdote from my time at uni, I can share that all our exams were either writing code with pen on paper for 3-4 hours, or a take-home exam that would make up 50% of the final grade. There was never any expectation that students would use pen and paper on their take-home exams. You were free to use your books and to search the web for help, but you were not allowed to copy any code you found without citing it. Also not allowed to collaborate with anyone.


Half way through a recent interview it became very apparent that the candidate was using AI. This was only apparent in the standard 'why are you interested in working here?' questions. Once the questions became more AI resistant the candidate floundered. There English language skills and there general reasoning declined catastrophically. These questions had originally been introduced to see how good the candidate was at thinking abstractly. Example: 'what is your creative philosophy?'

>There English language skills... declined catastrophically.

Let he who is without sin...


Point taken

> what is your creative philosophy?

Seriously?


Yep. Some candidates really enjoy the question to the point where it becomes difficult to get them to stop answering it.

Have you noticed a correlation in candidate quality vs the time spent answering that question?

> Have you noticed a correlation in candidate quality vs the time spent answering that question?

Yes, certainly.

Over the years our skill at interviewing has grown. We try to craft a balance between asking questions which require exact responses, and those which the candidate is free to improvise upon.


It's also a personal question, not a "why should someone work here", but a "what motivates YOU"

As someone for whom the answer is always 'money' I learned very quickly that a certain level of -how should I call it- bullshit is necessary to get the HR person to pass my CV to someone competent. As I am not as skilled in bullshit as I am in coding, it would make sense to outsource that irrelevant part of the selection process, no?

Maybe it makes sense for you, however from their perspective, it's not an "irrelevant" part of the selection process, but the most important part.

You can use an AI assistant to help you fix grammar and come up with creative reasons why you should work there.

This adds another twist, since I'd bet nowadays most CVs are processed (or at least pre-screened) by "AI": we're in a ridiculous situation where applicants feed a few bullet points to AI to generate full-blown polished resumes and motivational letters … and then HR uses different AI to distil all that back to the original bullet points. Interesting times.

This makes me think about adversarial methods of affecting the outcome, where we end up with a "who can hack the metabrain the best" contest. Kind of like the older leet-code system, where obviously software engineering skills were purely secondary to gamesmanship.

It's a bad question. What is actually being tested here is whether the candidate can reel off an 'acceptable' motivation. Whether it is their motivation or not. This is asking questions that incentivize disingenuous answers (boo) and then reacting with pikachu shock when the obvious outcome happens.

It makes sense. Having the right people with the right merits and motivations will become even more important in the age of AI. Why you might ask? Execution is nothing when AI matures. Grasping the big picture, communicating effectively and possessing domain knowledge will be key. More roles in cognitive work will become senior positions. Of course you must know how to make the most out of AI, but it is more interesting what skills you bring to the table without it.

> Grasping the big picture, communicating effectively and possessing domain knowledge will be key

But isn't this all the things AI promises to solve for you?


It's almost like what people have been saying for years: there's the promise of AI and the reality of AI - and they're 2 very different things. They only look similar to a layman without experience in the field.

>communicating effectively

AI may not yet be as good an engineer as most coders, but it's already absolutely much better at written communication than the average software engineer (or at least more willing to put effort into it).


Funny on the tin, but it makes complete sense to me. A sunglasses company would also ask me to take off my sunglasses during the job interview, presumably.

Thanks for the amusing mental image.

You wouldn't show up drunk to a job interview just because it's at a brewery, would you?

I guarantee you the lawns at the lawnmower manufacturer are not cut with scissors.

Isn't Anthropic's stated goal to make "nice" self-improving general AI? Cut out the middleman and have the AI train the next generation.


If getting drunk significantly increased my ability to appear competent then, yes, I would...

If the brewery is selling and promoting drinking at work, then yes.

Never get high on your own supply ;)

Anthropic is kind of positioning themselves as the "we want the cream of the crop" company (Dario himself said as much in his Davos interviews), and what I could understand was that they would a) prefer to pick people they already knew b) didn't really care about recruiting outside the US.

Maybe I read that wrong, but I suspect they are self-selecting themselves out of some pretty large talent pools, AI or not. But that application note is completely consistent with what they espouse as their core values.


Do they also promise not to use AI to evaluate the answers?

Also, will they be happy to provide 200-400 words of reasoning on how the answer to their question was evaluated, for each and every candidate? Written by a human.

I would 100% expect a company to not use AI to evaluate candidates and, if they are, I wouldn't want to work there. That's far worse than using AI as the candidate.

Everyone arguing for LLMs as a corrupting crutch needs to explain why this time is different: why the grammar-checkers-are-crutches, don't-use-wikipedia, spell-check-is-a-crutch, etc. etc. people were all wrong, but this time the tool really is somehow unacceptable.

It also depends on what you're hiring for. If you want a proofreader you probably want to test their abilities to both use a grammar checker, and work without it.

For me the difference is that using an LLM requires an insane amount of work from the interviewer. Fair enough that you'd use Copilot day to day, but can you actually prompt it? Are you able to judge the quality of the output (or were you planning on just pawning that off to your code reviewer)? The spell checker is a good example: do you trust it blindly, or are you literate enough to spot when it makes mistakes?

The "being able to spot the mistakes" is what an interviewer wants to know. Can you actually reason about a problem and sadly many cannot.


You forgot calculators ;)

Definitely agreed. And slide rules, and log tables, and...

The goal of an interview is to assess talent. AI use gets in the way of that. If the goal were only to produce working code, or to write a quality essay, then sure use AI. But arguing that misunderstands the point of the interview process.

Disclaimer: I work at Anthropic but these views are my own.


Not new; they had that 5 years ago at least.

Anthropic interview is nebulous. You get a coding interview. Fast paced, little time, 100% pass mark.

Then they chat to you for half an hour to gauge your ethics. Maybe I was too honest :)

I'm really bad at the "essay" subjects vs. the "hard" subjects so at that point I was dumped.


I recently took their CodeSignal assessment, which is part of their initial screening process.

Oh, wow. I really believe they are missing out on great engineers due to the nature of it.

90 minutes to implement a series of increasingly difficult specs and pass all the unit tests.

There is zero consideration for quality of code. My email from the recruiter said (verbatim), “the CodeSignal screen is intended to test your ability to quickly and correctly write and refactor working code. You will not be evaluated on code quality.”

It was my first time ever taking a CodeSignal assessment and there was really no way to prepare for it ahead of time.

Apparently, I can apply again in 6 months.


You can definitely pass their interview without solving everything.

If your test can be done by an LLM maybe you shouldn't be hiring a human being based on that test...

It’s cause they wanna use the data to train AI on, and training AI on AI is useless.

How much you wanna bet they're using AI to evaluate applicants and they don't even have a human reading 99% of the applications they're asking people to write?

As someone who has recently applied to over 300 jobs, just to get form letter rejections, it's really hard to want to invest my time to hand-write an application that I know isn't even going to be read by a human.


It’s always the popular clubs that make the most rules

Pretty ironic that they use an automated system called CodeSignal that does the first round of the interviews

Relevant (and could probably have been a comment there): https://news.ycombinator.com/item?id=42909166 "Ask HN: What is interviewing like now with everyone using AI?"

I understand why it's amusing, but there is really nothing to see here. It could be rephrased as:

« The process we use to assess candidates relies on measuring the candidate's ability to solve trivia problems that can easily be solved by AI (or internet search or impersonation etc). Please refrain from using such tools until the industry comes up with a better way to assess candidates. »

Actually, since the whole point of those many screening levels during hiring is to avoid the cost of having long, in-depth discussions between many experts and each individual candidate, probably AI will be the solution that makes the selection process a bit less reliant on trivia quizzes (a solution that will, no doubt, come with its own set of new issues).


I'm sure Anthropic have too many applications submitted that are obviously AI generated, and I am sure that what they mean by "non-AI-assisted communication" is that they don't want "slop" applications that sound like an LLM wrote them. They want some greater proof of human ability. I expect humans at Anthropic can tell what LLM model was used to rewrite (or polish) the applications they get, but if they can't, a basic BERT classifier can (I've trained one for this task, it's not so hard).
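
Roughly the kind of thing I mean, as a minimal sketch: fine-tune a small BERT model on "human-written" vs "LLM-polished" answers. The model name is real, but the example texts, labels, and training setup are purely illustrative assumptions; a usable detector needs a large labelled corpus and a proper eval split.

    # Minimal sketch: fine-tune BERT as a "human vs. LLM-polished" text classifier.
    # The two example texts and their labels are made up for illustration only.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    texts = [
        "I've followed your interpretability papers since 2021 and argued about them at work.",  # human-ish -> 0
        "I am writing to express my enthusiastic interest in this esteemed opportunity.",        # LLM-ish  -> 1
    ]
    labels = torch.tensor([0, 1])

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    # Tokenize the whole (tiny) batch at once.
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    model.train()
    for _ in range(3):                       # a real run needs thousands of samples and proper batching
        optimizer.zero_grad()
        out = model(**enc, labels=labels)    # cross-entropy loss is returned on out.loss
        out.loss.backward()
        optimizer.step()

    # Score a new application answer.
    model.eval()
    with torch.no_grad():
        probe = tokenizer(["Why do you want to work at Anthropic? Because..."], return_tensors="pt")
        probs = torch.softmax(model(**probe).logits, dim=-1)
    print("P(LLM-polished):", round(probs[0, 1].item(), 2))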

Prepping for an interview a couple weeks ago, I grabbed the latest version of IntelliJ. I wanted to set up a blank project with some tests, in case I got stuck and wanted to bail out of whatever app they hit me with and just have unit tests available.

So lacking any other ideas for a sample project I just started implementing Fizzbuzz. And IntelliJ started auto suggesting the implementation. That seems more problematic than helpful, so it was a good thing I didn’t end up needing it.


Why aren't they dog fooding? Surely if AIs improve output and performance they should readily accept input from them. Seems like they don't believe in their own products.

This probably means they are completely unable to differentiate between AI and non-AI else they would just discard the AI piles of applications.

This feels very similar to ophthalmologists who make their money pushing LASIK while refusing to get it done on themselves or their relatives. "This procedure is life-changing! But..."

Anyway, bring back in-person interviews! That's the only way to work around this Pandora's Box they themselves opened.


How do you guys do coding assessments nowadays with AI?

I don’t mind if applicants use it in our tech round, but if they do I question them on the generated code and potential performance or design issues (if I spot any) - but not sure if it’s the best approach (I mostly hire SDETs so do an ‘easy’ dev round with a few easy/very easy leet code questions that don’t require prep)


This strikes me as similar to job applicants who apply for a position and are told it's hybrid or in-office - and then on the day of the interview, it suddenly changes from an in-person interview to one held over videoconference, with the other participants in front of backdrops that look suspiciously like they're working from home.

Aha: maybe they want to train their AI on their applicant’s / job seekers text submissions :D

slop for thee but not for me

don't get high off your own supply

This has a poetic tone to it.

However, not sure what to think of it. So AI should help people on their job and their interview process, but also not? When it matters? What if you're super good at ML/AI, but very bad at doing applications? Would you still have a chance?

Or do you get filtered out?


Only if they stop screening us with their shitty AI first. Otherwise it is slop vs slop.

So suddenly we're in a state where:

- AI companies ask candidates to not "eat their own dog food"

- AI companies accuse each other of "copying" their IP while they find it legit to use humans' IP for training.


You want to work at an AI company that does not allow the use of AI by its future employees.

That is likely enough said right there. Keep looking for a company that has its head screwed on straight.


I'd be fine with this if they agree to not use AI to assess you as a candidate.

Don't get high on your own supply, like zuck doing the conquistador in Kaua'i

> We want to understand your personal interest in Anthropic without mediation through an AI system

Is the application being reviewed with the help of an AI assistant though? If yes, AI mediation is still taking place.


On the face of it it's a reasonable request, but the question itself is pointless. An applicant's outside opinion on a company is pretty irrelevant and is subject to a lot of change after starting work.

Cool. Does that mean Anthropic is not using ATS to scan resumes?

Of course it doesn’t…


Only if they stop screening us with their shitty AI first.

I generally trust Anthropic vs others, I think Claude (beyond obligatory censorship) ticks all the right boxes and strikes the right balance

Plot twist: They are actually looking for the freethinkers who are subversive enough to still use AI assistants.

This reminds me of an old interview, years ago, when they asked me to code something "without using Google"....

TBH it's motivated me to apply with AI to try to somehow get away with it.

(I need to reevaluate my work load and priorities.)


Funny that this massive irony comes out just now, as I don't think I'll renew my subscription with them because of R1.

That's hilarious. A comedy script couldn't beat real life in 2025.

Evaluators neither but here we are

AI for thee but not for me?

This insistence of using only human intelligence reminds me of the quest for low-background steel.

this is a reasonable request, provided there is a human on the other side who is going to read the 200-400 word response, and make a judgment call.

>Why do you want to work at Anthropic? (We value this response highly - great answers are often 200-400 words.)

Low lifes


The HR are probably using AI to waste our time with their ridiculously worded job descriptions and now you can have a computer respond... You have simply completed the circle of stupidity. If they are upset you have sidestepped putting yourself inside their circle, maybe there is a better place to work after all...

Much better approach is to ask the candidate about the limitations of AI assistants and the rakes you can step on while walking that path. And the rakes you have already stepped on with AI.

Seems reasonable.

At least someone realizes the soulless unimaginative mediocrity machine makes people sound soulless, unimaginative, and mediocre.

So I guess people should not use other available tools? Spell checker? Grammar checker? The Internet? Grammarly?

The issue is that they are receiving excellent responses from everyone and can no longer discriminate against people who are not good at writing.


Whenever someone asks you to not do something that is victimless, you always should think about the power they are taking away from you, often unfairly. It is often the reason why they have power over you at all. By then doing that very thing, you regain your power, and so you absolutely should do it. I am not asking you to become a criminal, but to never be subservient to a corporation.

Beyond ridiculous. I am lacking words on how stupid this statement is coming from the AI company who enables all this crap.

Maybe they are ahead of the curve at finding that hiring people based on ability to exploit AI-augmented reach produces catastrophically bad results.

If so, that's bad for their mission and marketing department, but it just puts them in the realm of a tobacco company, which can still be quite profitable so long as they don't offer health care insurance and free cigarettes to their employees :)

I see no conflict of interest in their reasoning. They're just trying to screen out people who trust their product, presumably because they've had more experience than most with such people. Who would be more likely to attract AI-augmented job applicants and trust their apparent augmented skill than an AI company? They would have far more experience with this than most, because they'd be ground zero for NOT rejecting the idea.


Question 1:

Write a program that describes the number of SS's in "Slow Mississippi bass". Then multiply the result by hex number A & 2.

Question 2:

Do you think your peers will haze you week 1 of the evaluation period? [Yes|No]

There are a million reasons to exclude people, and most HR people will filter anything odd or extraordinary.

https://www.youtube.com/watch?v=TRZAJY23xio&t=1765s

Hardly a new issue, =3


If Alice can do better against Bob when they aren’t using AI, but Bob performs better when both use AI, isn’t it in the company’s best interest to hire Bob, since AI is there to be used during his position duties?

If graphic designer A can design on paper better than B, but B can design on the computer better than A, paper or computer, why would you hire A?


That's totally reasonable, imo. You also can't look up the answers using a search engine during your application to work at Google

That's actually not reasonable.

Why not

This probably depends on the questions.

When applying for a college math professor job, it's understandable that you'll use mathematica/matlab/whatever for most of the actual work, but needing a calculator for simple multiplication-table style calculations would be a red flag. Especially if there is lecturing involved.


application or interview?

this is about application.


Good luck with that

> please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.

Exact opposite of our application process at my previous company. We said usage of ChatGPT was expected during the application and interview phase, since we heavily rely on it for work


Wow. I heavily rely on Google for work, but I wouldn't expect a candidate to spend precious interview time googling, though.

There are a bunch of subtly different ways to perform a coding interview.

If the interviewer points you at a whiteboard and asks you how to reverse an array, most likely they're checking you know what a for loop is and how to index into an array, and how to be careful of off-by-one errors. Even if your language has a built-in library function for doing this, they'd probably like you to not use it.

If the interviewer hands you a laptop with a realistic codebase on it and asks you to implement e-mail address validation, they're going for a more real-world test. Probably they'll be fine with you googling for an e-mail address validation regex, what they want to see is that you do things like add unit tests and whatnot.
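
To illustrate the difference, here's a rough sketch of what the interviewer is hoping to see in that second case. The regex is deliberately simplistic and only an assumption for illustration (real e-mail validation per the RFCs is much hairier); the point is the tests around it.

    # Sketch: a deliberately simple e-mail validator, plus the unit tests an
    # interviewer actually wants to see. The regex is illustrative, not RFC-complete.
    import re
    import unittest

    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def is_valid_email(address: str) -> bool:
        """Return True if `address` looks like a plausible e-mail address."""
        return bool(EMAIL_RE.match(address))

    class TestEmailValidation(unittest.TestCase):
        def test_accepts_ordinary_address(self):
            self.assertTrue(is_valid_email("jane.doe@example.com"))

        def test_rejects_missing_at_sign(self):
            self.assertFalse(is_valid_email("jane.doe.example.com"))

        def test_rejects_embedded_whitespace(self):
            self.assertFalse(is_valid_email("jane doe@example.com"))

    if __name__ == "__main__":
        unittest.main()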


Makes sense. I've never been asked to make such an exercise in real time, most of the time that would be a take home task - but I understand if someone wants to do that. Still it would be weird to demand that a candidate uses Google, wouldn't it?

I picked e-mail validation as an example precisely because it's something even experienced developers would be well advised to google if they want to get it right :)

Of course, if someone can get it right off the top of their head, more power to them!


> We said usage of ChatGPT was expected during the application and interview phase, since we heavily rely on it for work

You must have missed out on hiring some very good candidates, then.



