
Software itself isn't intractable; it's that the field is young, we're stuck with choices made when nothing was understood, and it's going to take a while to turn the ship. But I think we have a pretty good idea of where we're trying to go wrt writing secure software.



> it's that the field is young

The opposite. When the field was in its infancy, one could keep the whole stack in one's head.

How complicated were CPUs in the 1960s?

How many lines of assembler were in the LM?

How many lines is the Linux or FreeBSD kernel? Now add libc.

Now you have a 1970s C compiler.

Now take into account all the optimizations any modern C compiler does. Now make sure there are no bugs _there_.

Now add a Python stack.

Now you can have decent, "safe" code. Most hacks don't target this part; the low-hanging fruit is elsewhere.

You need a math library. OK, import that. You need some other library. OK, import that.

Oops, there's a bug in one module. Or the admin setup wasn't done right. Or something blew.

Bam. You have the keys to the kingdom.

And this is all deterministic. Someone _could_ verify that there are no bugs here.

But what about Neural Networks? The whole point of training is that the programmers _can't_ write a deterministic algorithm to self drive, and have to have a huge NN do the heavy lifting.

And that's not verifiable.

_This_ is what's going to be running your self-driving car.
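The contrast can be sketched in a few lines of Python (all the numbers and the tiny network here are made up for illustration): a deterministic function over a finite domain can be verified exhaustively, while a "trained" network is just a pile of weights over a continuous input space, so all we can do is sample and hope.

```python
import math

def clamp(x, lo, hi):
    """Deterministic: every case in a finite domain can be checked."""
    return max(lo, min(hi, x))

# Exhaustive verification over a toy domain -- literally every input is tried.
assert all(0 <= clamp(x, 0, 9) <= 9 for x in range(-100, 100))

def tiny_net(x, y):
    """A 2-2-1 network with hypothetical 'learned' weights. Its behavior
    is implicit in the numbers; there is no rule to read off and prove."""
    w1 = [[1.7, -2.3], [-0.9, 2.1]]   # made-up weights, not from real training
    b1 = [0.4, -1.2]
    w2 = [2.2, -1.8]
    b2 = 0.1
    h = [math.tanh(w1[i][0] * x + w1[i][1] * y + b1[i]) for i in range(2)]
    z = w2[0] * h[0] + w2[1] * h[1] + b2
    return 1 / (1 + math.exp(-z))     # sigmoid, probability-like output

# The inputs are continuous, so exhaustive checking is impossible;
# the best we can do is sample a grid and hope it's representative.
samples = [(0.1 * i, 0.1 * j) for i in range(-10, 11) for j in range(-10, 11)]
outputs = [tiny_net(x, y) for x, y in samples]
print(min(outputs), max(outputs))
```

Scale the second function up to millions of weights and the gap between "testable" and "verifiable" is the whole argument.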

That's why I compared software engineering to biology, where we "test" a lot, hope for the best, and have it blow up in our face a generation later.


The need to hold whole stacks in one's head is the problem. That's not abstraction, and that's not how math works. The mouse doesn't escape the wheel by running faster.



