
I strongly agree with this comment. Anecdotal evidence time!

I'm an experienced dev (20 years of C++ and plenty of other stuff), and I frequently work with younger students in a mentor role, e.g. I've done Google Summer of Code three times as a mentor, and am also in KDE's own mentorship program.

In 2023/24, when ChatGPT was looming large, I took on a student who was of course attempting to use AI to learn and who was enjoying many of the obvious benefits - availability, tailoring information to his inquiry, etc. So we cut a deal: We'd use the same ChatGPT account and I could keep an eye on his interactions with the system, so I could help him when the AI went off the rails and was steering him in the wrong direction.

He initially made fast progress on the project I was helping him with, and was able to put more working code in place than others in the same phase. But soon after, he hit a plateau really hard, because he was running into bugs and issues the AI couldn't give him solutions for, and he just wasn't able to connect the dots himself.

He'd almost get there, but would sometimes forget to remove random single lines doing the wrong thing, etc. His mental map of the code was poor, because he hadn't written it himself in that oldschool "every line a hard-fought battle" style that really makes you understand why and how something works and how it connects to problems you're solving.

As a result he'd get frustrated and then had bouts of absenteeism, because there wasn't any string of rewards and little victories there, just listless poking in the mud.

To his credit, he eventually realized leaning on ChatGPT was holding him back mentally and he tried to take things slower and go back to API docs and slowly building up his codebase by himself.


It's like when you play World of Warcraft for the first time and you have this character boost to max level and you use it. You didn't go through the leveling phase and you do not understand the mechanics of your character, the behaviour of the mobs, or even how to get to another continent.

You are directly loaded with all the shiny tools and, while it does make it interesting and fun at first, the magic wears off rather quickly.

On the other hand, when you've had to fight and learn your way up to level 80, you have this deeper and well-earned understanding of the game that makes for a fantastic experience.


'"every line a hard-fought battle" style that really makes you understand why and how something works'

I totally agree with this and I really like that way of wording it.


This is fascinating. The idea of leveling off in the learning curve is one that I hadn't considered before, although with hindsight it seems obvious. Based on your recollection (and without revealing too many personal details), do you recall any specific areas that caused the struggle? For example, was it a lack of understanding of the program architecture? Was it an issue of not understanding data structures? (or whatever) Thanks for your comment, it opened up a new set of questions for me.

A big problem was that he couldn't attain a mental model of how the code behaved at runtime, in particular the lifetimes of data and objects - what gets created or destroyed when, what exists at what point, what happens in what sequence, what lives for the whole runtime of the program vs. what's a temporary resource, that kind of thing.
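
To make that concrete with a toy C++ sketch of my own (not code from his actual project, just the canonical shape of the mistake): a classic lifetime bug is holding a pointer into a temporary that's already been destroyed by the time you use it.

    #include <iostream>
    #include <string>

    std::string makeGreeting() {
        return "hello";  // hands the caller a temporary std::string
    }

    int main() {
        // The temporary returned by makeGreeting() only lives until the
        // end of this full expression, so `p` dangles immediately after.
        const char* p = makeGreeting().c_str();
        std::cout << p << '\n';  // undefined behavior: the buffer is gone

        // With a mental model of lifetimes, the fix is obvious: keep the
        // object alive for as long as you need its data.
        std::string greeting = makeGreeting();
        std::cout << greeting.c_str() << '\n';  // fine: greeting outlives this use
        return 0;
    }

If you can't "see" when that temporary dies, locally plausible AI output won't save you from this class of bug.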

The overall "flow" of the code didn't exist in his head, because he was basically taking small chunks of code in and out of ChatGPT, iterating locally wherever he was, with the project just sort of growing organically that way. This is likely also what made the ChatGPT outputs themselves less useful over time: He wasn't aware of enough context to prompt the model with, so it didn't have much to work with. There wasn't a lot of emergent intelligence a la "provide what the client needs, not what they think they need".

These days, tools like aider transparently prompt the model with a repo map etc. in the background, but in 2023/24 that infrastructure didn't exist yet, and the context windows of the models at the time were also much smaller.

In other words, the evolving nature of these tools might lead to different results today. On the other hand, if that tooling had existed back then, chances are he'd have become even more reliant on it. The open question is whether there's a threshold where it just stops mattering - if the results are always good, does it matter that the human doesn't understand them? Naturally I find that prospect a bit frightening and creepy, but I assume some slice of the work will start looking like that.


> "every line a hard-fought battle" style that really makes you understand why and how something works

Absolutely true. However:

The real value of AI will be to *be aware* of when it's stuck at such a local optimum, and then - if it's unable to find a way forward - to at least reliably notify the user that that is indeed the case.

Bottom line, the number of engineering "hard-fought battles" one can take on is finite, and they should be chosen very wisely.

The performance multiplier that LLM agents brought has changed the world at least as much as the consumer web did in the '90s, and there will be no turning back.

This is like a computer company around 1980 hiring engineers but forbidding them access to computers for some numerical task.

Funny, it reminds me of the reason Konami's MSX1 games look the way they do compared to most of the competition: access to superior development tools, namely their HP hardware emulator workstations.

If you are unable to come up with a filter for your applicants that is able to detect your own product, maybe you should evolve. What about asking an AI how to solve this? ;)


I have a feeling that "almost getting there" will simply become the norm. I have seen a lot of buggy, almost-but-not-exactly-right applications, processes, and even laws that people simply have to live with.

If the US can be the world's biggest economy while having an opioid epidemic and writing paper cheques, and if Germany can be Europe's manufacturing hub while using faxes, then surely we as a society can live with the suboptimal state of everything digital being broken 10% of the time instead of half a percent.


Using faxes is much more streamlined than the more modern Scan, Email, Print process.

Only if you're starting with paper?

Years back I worked somewhere where we had to PDF documents to e-fax them to a supplier. We eventually found out that on their end it was just being received digitally and auto-converted to PDF.

It was never made paper. So we asked if we could just email the PDF instead of paying for this fax service they wanted.

They said no.


There was a comment here on HN, I think, that explained why enterprises spend so much money on garbage software. It turned out that the garbage software was a huge improvement on what they did before, so it still saved time and money and was easier than a total overhaul.

I wonder what horror of process and machinery the supplier used before the fax->PDF process.


I once worked on a janky, held-together-with-duct-tape-and-bubblegum distributed app written in Microsoft Access. Yes, Microsoft Access for everything, no central server, no Oracle, no Postgres. Data was shared between client and server by HTTP downloads of zipped-up Access .mdb files which got merged into the clients' main database.

The main architect of the app told me, "Before we came along, they were doing all this with Excel spreadsheets. This is a vast improvement!"


There shouldn't ever be a print or scan step in the pipeline.

Found the German!

This seems to be the way of things. Oral traditions were devastated by writing, but the benefit is another civilization can hold on to all your knowledge while you experience a long and chaotic dark age so you don't start from 0 when the Enlightenment happens.

What about people who don't have access to a mentor? If not AI then what is their option? Is doing tutorials on your own a good way to learn?

Write something on your own. When stuck, consult the docs, Google the error message, and ask on StackOverflow (in that order).

There's no royal road to learning.


So, so, so many people have learnt to code on their own without a mentor. It requires a strong desire to learn and perseverance but it’s absolutely possible.

That you can learn so much about programming from books and open source and trial and error has made it a refuge for people with extreme social anxiety, for whom "bothering" a mentor with their questions would be unthinkable.

Not sure! My own path was very mentor-dependent. Participating in open source communities worked for me to find my original mentors as well. The other participants are incentivized to mentor/coach because the main thing you're bringing is time and motivation--and if they can teach you what you need to know to come back with better output while requiring less handholding down the road, their project wins.

It's not for everyone because open source tends to require you to have the personality to self-select goals. Outside of more explicit mentor relationships, the projects aren't set up to provide you with a structured curriculum or distribute tasks. But if you can think of something you want to get done or attempt in a project, chances are you'll get a lot of helping hands and eager teachers along the way.


Mostly by reading a good book to get the fundamentals down, then taking on a project to apply the knowledge and supplementing the gaps with online resources. There are good books and nice open source projects out there. You can get far just by studying them with determination. Later you can move on to the theoretical and philosophical parts of the field.

How do you know what a good book is? I've seen recommendations in fields I'm knowledgeable about that were hot garbage. Those were recommendations by reputable people for reputable authors. I don't know how a beginner is supposed to start without trying a few and picking up some bad habits.

If you're a beginner, almost any book by a reputable publisher is good. The controversial ideas start at the upper intermediate or advanced level. No beginner knows enough to argue about Clean Code or the Gang of Four book.

There is no 'learning' in the abstract. You learn something. Doing tutorials teaches you how to do the things you do in them.

It all comes down to what you wanna learn. If you want to acquire the skills of doing the things you can ask an AI to do, it's probably a bad idea to use one. If you want some pointers on a field where you don't even know which keywords are relevant to take to a library, LLMs can help a lot.

If you wanna learn complex, context-dependent professional skills, I don't think there's an alternative to an experienced mentor.


Failing for a bit, thinking hard, and then somehow getting to the answer - for me it was usually tutorials, asking on StackOverflow/forums, or finding a random example on some webpage.

The fastest way for me to learn something new is to find working code, or code that I can kick for a bit until it compiles/runs. Often I'll comment out everything and make it print hello world, and then from there try to figure out which bits are essential and need to come back in, or can be simplified/mocked, etc., until it works again.
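
As a rough sketch of what that looks like (hypothetical names - pretend there's some library "foo" whose example code I'm learning from):

    // Step 0: comment out everything except a hello world and get it building.
    #include <iostream>
    // #include <foo/client.h>       // hypothetical library header, disabled for now

    int main() {
        std::cout << "hello world\n";  // proves the toolchain and build flags work

        // Step 1+: re-enable one chunk at a time, recompiling after each:
        // foo::Client client;           // is this setup really essential?
        // client.connect("localhost");  // or does it need config I stripped out?
        return 0;
    }

Every chunk I bring back is a little experiment in what the code actually needs.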

I learn a lot more by forming a hypothesis "to make it do this, I need that bit of code, which needs that other bit that looks like it's just preparing this/that object" - and the hypothesis gets tested every time I try to compile/run.

Nowadays I might paste the error into ChatGPT and it'll say something that will lead me a step or two closer to figuring out what's going on.


Why is modifying working code you didn't write better than having an AI help write code with you? Is it that the modified code doesn't run until you fix it? It still bypasses the 'hard-won effort' criterion, though?

I forgot to say: the aim is usually to integrate it into a bigger project that I'm writing by myself. The working code is usually for interfacing with libraries I didn't write - I could spend a year reading every line of code of a given library and understanding everything it does, only to realise it doesn't do what I want. The working code is to see what it can do first, or to kick it closer to what I want - only when I know it can do it will I spend the time to fully understand what's going on. Otherwise a hundred lifetimes wouldn't be enough to go through the amount of freely available crapware out there.

> As a result he'd get frustrated and then had bouts of absenteeism, because there wasn't any string of rewards and little victories there, just listless poking in the mud.

So as a mentor, you totally talked directly with them about what excites them, tied it to their work, encouraged them to talk about their frustrations openly, helped them develop resilience by showing them that setbacks are part of the process, and helped give them a sense of purpose and see how their work contributes to a bigger picture - to directly address the side effects of being a human with emotions, which could have happened regardless of the tool they used - and didn't just let them flounder because of your personal feelings about a particular tool, right? Or do you only mentor winners? Have you never had a mentee hit a wall before LLMs were invented, and never had to help anyone through the emotional lows that an immature intern might need a mentor's help to work through?


> listless poking in the mud

https://xkcd.com/1838/
