
One of the more interesting aspects of this entire saga was that Helen Toner recently wrote a paper critical of OpenAI and praising Anthropic.

> Yet where OpenAI’s attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic’s signal may have simply failed to cut through the noise. By burying the explanation of Claude’s delayed release in the middle of a long, detailed document posted to the company’s website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed [1].

That is indeed quite the paper to write whilst on the board of OpenAI, to say the least.

[1] https://cset.georgetown.edu/publication/decoding-intentions/




Not to mention this statement ... imagine such a person on your startup board!

> During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities.

> Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company was destroyed, she said, that could be consistent with its mission. In the board’s view, OpenAI would be stronger without Mr. Altman.


>imagine such a person on your startup board!

Yeah, such a person totally blocks your startup from making billions of dollars instead of benefitting humanity.

Oh wait...


The other plausible explanation is that Helen Toner doesn’t care as much about safety as about her personal power and clinging to the seat that gives her importance. Saying it’s for safety is very easy and the obviously popular choice if you want to hide your motives. The remark she made strikes me as borderline narcissistic in retrospect.


If we want to play that game, you could just as easily say Altman's critique of her wasn't meant to protect the company but to protect his assets.

Altman is past borderline.


> That is indeed quite the paper to write whilst on the board of OpenAI, to say the least.

It strikes me as exactly the sort of thing she should be writing given OpenAI's charter. Recognizing and rewarding work towards AI safety is good practice for an organization whose entire purpose is the promotion of AI safety.


Yeah, on one hand, the difference between a business and a charity oriented around a mission like OpenAI's nominal charter is that the latter naturally ought to be publicly and honestly introspective: its mission isn't private gain but achieving a public effect, and both recognizing success elsewhere and openly acknowledging your own shortcomings are important to that.

On the other hand, it's quite apparent that essentially all of the OpenAI workforce (understandably, given a compensation package that creates a financial interest at odds with the nonprofit's mission), and in particular the entire executive team, saw the charter as a useful PR fiction, not a mission. The possible exception is Ilya, though his flip-flop in the middle of this action may mean he saw it the same way, but thought that, given the conflict, dumping Sam and Greg was the only way to preserve the fiction, and that whatever it cost would be worthwhile given the fiction's function.


And Anthropic doesn't get credit for stopping the robot apocalypse when it was never even possible. AI safety seems a lot like framing losing as winning.



