
I'm not so sure about that. I was really anti-LLM with the previous generation of models (GPT-3.5/4), but I never stopped trying them out. I just found the results to be subpar.

Since reasoning models came about, I've been significantly more bullish on them, purely because they are less bad. They're still not amazing, but they're at a point where including them in my workflow isn't an impediment.

They can now reliably complete a subset of tasks without me needing to rewrite large chunks of the output myself.

They are still pretty terrible at edge cases (uncommon patterns, libraries, etc.), but when on the beaten path they can actually improve productivity pretty decently. I still don't think it's 10x (well, today was the first time I felt a 10x improvement, but I was moving frontend code from a custom framework to React; that was more tedium than anything else, and the AI did a spectacular job).
