
In addition to commercialization providing money for AI development, isn't there also the argument that prudent commercialization is the best way to test the models for possible dangers? I think I saw Mira Murati take that position in an interview. In other words, creating a product that people want to use so much that they are willing to pay for it is a good way to stress-test the product.

I don't know if I agree, but the argument did make me think.




Additionally, when you have a pre-release product that has largely passed small and artificial tests, you get diminishing returns on continued testing.

Eventually you need to expand, despite some risk, to push the testing forward.

Everyone has a different opinion on what level of safety AI should reach before it's released. "Makes no mistakes" and "never says something mean" are not attainable goals; "reduce the rate of hallucinations, as defined by x, to <0.5% of total responses" and "given a set of known and imagined scenarios, the new model continues to have a zero false-negative rate" are.

When it's an engineering problem we're trying to solve, we can make progress, but no company can avoid every form of harm as defined by everyone.
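
To make that framing concrete, here is a minimal sketch of what such a release gate might look like in code. Everything in it is hypothetical: the data shapes, label names, and thresholds are just the examples from the comment above, not any lab's actual criteria.

    # Minimal sketch of the "measurable engineering targets" framing.
    # All names, labels, and thresholds here are hypothetical.

    def hallucination_rate(responses):
        # responses: list of dicts like {"is_hallucination": bool},
        # labeled per some agreed-upon definition ("x" above).
        flagged = sum(1 for r in responses if r["is_hallucination"])
        return flagged / len(responses)

    def false_negative_rate(scenarios):
        # scenarios: known and imagined unsafe prompts; a false negative
        # is an unsafe scenario the model failed to refuse.
        unsafe = [s for s in scenarios if s["unsafe"]]
        misses = sum(1 for s in unsafe if not s["refused"])
        return misses / len(unsafe)

    def passes_release_gate(responses, scenarios):
        # The <0.5% and zero-false-negative targets from above.
        return (hallucination_rate(responses) < 0.005
                and false_negative_rate(scenarios) == 0.0)

The point of the sketch is that each criterion is checkable against a fixed eval set, which is what makes it an engineering target rather than an open-ended promise.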



