
I've made this point a bunch of times in other mediums: the reason AI software is always "AI software" and not just a useful product is that AI is fallible.

The reason we can build such deep and complex software systems is that each layer can assume the one below it will "just work". If each layer only worked 99% of the time, we'd all still be interfacing with assembly, because we'd have to be aware of the mistakes made below us and deal with them; otherwise the errors would compound until software was useless.

Until AI achieves the level of determinism we have with other software, it'll have to stay at the surface.




Recent work from Meta uses AI to automatically increase test coverage with zero human checking of AI outputs. They do this with a strong oracle for AI outputs: whether the AI-generated test compiles, runs, and hits yet-unhit lines of code in the tested codebase.

We probably need a lot more work along this dimension of finding use cases where strong automatic verification of AI outputs is possible.
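
Concretely, the filter can be fully mechanical. Here's a rough Python sketch of that kind of oracle (my own illustration, not Meta's actual TestGen-LLM pipeline; pytest, coverage.py, and the tests/ path are stand-ins for whatever harness you have):

    import json, subprocess, sys

    def coverage_percent(extra_test=None):
        # Total line coverage of the existing suite, optionally with one candidate test added.
        cmd = ["coverage", "run", "-m", "pytest", "-q", "tests/"]
        if extra_test:
            cmd.append(extra_test)
        subprocess.run(cmd, check=False)
        subprocess.run(["coverage", "json", "-o", "cov.json"], check=False)
        with open("cov.json") as f:
            return json.load(f)["totals"]["percent_covered"]

    def keep_candidate(test_file, runs=5):
        # 1. Does the generated test even parse/compile?
        if subprocess.run([sys.executable, "-m", "py_compile", test_file]).returncode != 0:
            return False
        # 2. Does it pass reliably, not just once?
        for _ in range(runs):
            if subprocess.run(["pytest", "-q", test_file]).returncode != 0:
                return False
        # 3. Does it raise coverage above what the existing suite already reaches?
        return coverage_percent(test_file) > coverage_percent()

Anything the model hallucinates that doesn't survive those checks just gets thrown away, which is what makes zero human review tolerable here.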


> with zero human checking of AI outputs

It can be hard enough for humans to just look at some (already consistently passing) tests and think, "is X actually the expected behavior or should it have been Y instead?"

I think you should have a look at the abstract, especially this quote:

> 75% of TestGen-LLM's test cases built correctly, 57% passed reliably, and 25% increased coverage. During Meta's Instagram and Facebook test-a-thons, it improved 11.5% of all classes to which it was applied, with 73% of its recommendations being accepted for production deployment by Meta software engineers

This tool sounds awesome in that it generated real tests that engineers liked! But "zero human checking of AI outputs" is a very different claim, and "this test passes" is very different from "this is a good test."


Good points regarding test quality. One takeaway for me from this paper is that you can increase code coverage with LLMs without any human checking of LLM outputs, because it’s easy to build a fully automated checker. Pure coverage may not be the most interesting metric, but it’s still nontrivial. LLM-based applications that run fully autonomously, without bubbling hallucinations up to users, seem elusive, but this is one example.


You hit the nail on the head. It's been almost tragically funny watching people frantically juggle five bars of wet soap over the last two years, solving problems that (from what I've seen so far) had already been solved in a (boring) deterministic way that consumes far fewer resources.

Going further: our predecessors put so much work into wrangling non-deterministic electronics into a stable and _correct_ platform that it looks ridiculous to watch people try to squeeze another layer of non-determinism back in to solve the same classes of problems.


The irony here is that there are many domains that use statistical methods and successfully bound their complexity and failure modes. A lot of people struggle with statistics, but in domains where the glove fits, I think AI will slot in nicely all across the stack.


But software works only 99% of the time, for some definition of "work": 99% of days it's run, 99% of clicks, 99% of CPU time in a given component, 99% of versions released and linked into some business's production binary, 99% of GitHub tags, 99% of commits, 99% of the software that that one guy says is battle-tested.


If twenty components each work 99% of the time, then collectively they only work 0.99^20 ≈ 82% of the time.

If your 5.1 GHz CPU (roughly 5.1 billion instructions per second) had a 0.00000001% chance of failing on any given instruction, you'd have about a 40% chance of a crash every second.

If a flight had a 1% chance of killing everyone aboard, then with roughly 10 million passengers flying per day, 10 million * 1% = 100,000 people would die in plane crashes every day.
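
A quick Python sanity check of those numbers (assuming failures are independent):

    # Twenty components that each work 99% of the time
    print(0.99 ** 20)                    # ~0.82, i.e. an 82% chance they all work

    # ~5.1e9 instructions/second, each with a 1e-10 probability of failure
    # (0.00000001% expressed as a probability)
    print(1 - (1 - 1e-10) ** 5.1e9)      # ~0.40, a 40% chance of at least one failure per second

    # 10 million passengers/day, 1% chance per flight of killing everyone aboard
    print(10_000_000 * 0.01)             # 100,000.0 deaths per day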


Gambler's fallacy.


Software works so much more than 99% of the time that it's a rather deliberate strawman to claim otherwise.

Newly-"AI"-branded things that I have touched work substantially less than 90% of the time. There are like 3 orders of magnitude difference, even people who aren't paying any attention at all are noticing it.


Do you have to write your code presuming that sometimes 'a + b' will be wrong? I don't.

Software pretty much always "works" when you consider the definition of work to be "does what the programmer told it to". AI? Not so much.


It’s all about limits and edge cases. a+b may “fail” at INT_MAX and at 0.1+0.2. You don’t `==` your doubles, you don’t (a+b)/2 your mid, and you don’t ask AI to just book your vacation. You ask it to “collect average sentiment from `these_5k_reviews()`, ignoring apparently fake ones, which are defined as <…>”. You don’t care about determinism because it’s a statistical instrument.
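
For anyone who hasn't hit those edge cases, a quick Python illustration (the overflow has to be simulated, since Python ints don't wrap the way C's do):

    import math

    # Floating point: don't `==` your doubles
    print(0.1 + 0.2 == 0.3)               # False
    print(math.isclose(0.1 + 0.2, 0.3))   # True -- compare with a tolerance instead

    # Fixed-width midpoint: (lo + hi) / 2 overflows near INT_MAX in C/Java
    INT_MAX = 2**31 - 1
    lo, hi = INT_MAX - 1, INT_MAX
    wrapped = ((lo + hi) & 0xFFFFFFFF) - 2**32   # what a 32-bit signed add yields here: -3
    print(wrapped // 2)                          # a negative "midpoint" -- the classic binary-search bug
    print(lo + (hi - lo) // 2)                   # 2147483646, the overflow-free idiom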


> and you don’t ask AI to just book your vacation. You ask it to “collect average sentiment from `these_5k_reviews()`, ignoring apparently fake ones, which are defined as <…>”.

That's exactly my point. You have to interact directly with the AI and be aware of what it's doing.


That's not true. If software works correctly today, users can expect it to work correctly tomorrow. If it doesn't work anymore, that's a bug.



