Hacker News

My guess is that the non-profit has never gotten this kind of scrutiny before, and the new directors are going to want to get lawyers involved to cover their asses. Just imagine their position when Sam Altman really does do something worth firing him over.

I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess. Imagine the fun when it tips into a private foundation status.




> I think it was a real mistake to create OpenAI as a public charity

Sure, with hindsight. But it didn't require much in the way of foresight to predict that some sort of problem would arise from the not-for-profit operating a hot startup that is by definition poorly aligned with the stated goals of the parent company. The writing was on the wall.


I think it could have easily been predicted just from the initial announcements. You can't create a public charity simply from the donations of a few wealthy individuals. A public charity has to meet the public support test. A private foundation would be a better model, but someone decided they didn't want to go that route. Maybe they should have asked a non-profit lawyer?


Maybe the vision is to eventually bring UBI into it and cap earn outs. Not so wild given Sam’s world coin and his UBI efforts when he was YC president.


The public support test for public charities is a 5-year rolling average, so "eventually" won't help you. The idea of billionaires asking the public for donations to support their wacky ideas is actually quite humorous. Just make it a private foundation and follow the appropriate rules. Bill Gates manages to do it and he's a dinosaur.
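The arithmetic behind that test can be sketched roughly as follows. This is an illustration only, not tax advice: it models the rule that, over the five-year window, gifts from any single individual donor count as "public support" only up to 2% of total support, and that capped public support must reach one third of the total. All donor names and amounts below are invented.

```python
def public_support_fraction(donations_by_donor):
    """Fraction of total support counting as 'public support' when each
    individual donor's gifts are capped at 2% of total support, roughly
    how the five-year public support test treats large donors.
    donations_by_donor: {donor_name: total given over the five-year window}.
    """
    total = sum(donations_by_donor.values())
    cap = 0.02 * total  # single-donor cap: 2% of total support
    counted = sum(min(amount, cap) for amount in donations_by_donor.values())
    return counted / total

# A charity funded almost entirely by a few wealthy individuals fails badly:
few_big_donors = {"donor_a": 40_000_000, "donor_b": 10_000_000, "public": 1_000_000}
few_fraction = public_support_fraction(few_big_donors)  # ~0.06, well under 1/3

# The same order of money raised from many small donors passes easily:
many_small_donors = {f"donor_{i}": 51_000 for i in range(1000)}
broad_fraction = public_support_fraction(many_small_donors)  # 1.0
```

The cap is what makes "a few billionaires" fatal: two donors supplying $50M of a $51M total contribute at most 2% each toward the numerator, no matter how large their gifts are.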


Exactly this. OpenAI was started for ostensibly the right reasons. But once they discovered something that would both 1) take a tremendous amount of compute power to scale and develop, and 2) could be heavily monetized, they chose the $ route, and at that point the mission was doomed, with the board members originally brought in to protect the mission holding their fingers in the dike.


Speaks more to a fundamental misalignment between societal good and technological progress. The narrative (first born in the Enlightenment) about how reason, unfettered by tradition and nonage, is our best path towards happiness no longer holds. AI doomerism is an expression of this breakdown, but without the intellectual honesty required to dive to the root of the problem and consider whether Socrates may have been right about the corrupting influence of writing stuff down instead of memorizing it.

What's happening right now is people just starting to reckon with the fact that technological progress on its own is necessarily unaligned with human interests. This problem has always existed; AI just makes it acute and unavoidable, since it's no longer possible to invoke the long tail of "whatever problem this fix creates will just get fixed later". The AI alignment problem is at its core a problem of reconciling this, and it will inherently fail in the absence of explicitly imposing non-Enlightenment values.

Seeking to build OpenAI as a nonprofit, as well as ousting Altman as CEO, are both initial expressions of trying to reconcile the conflict, and seeing these attempts fail will only intensify it. It will be fascinating to watch as researchers slowly come to realize what the roots of the problem are, but also the lack of the social machinery required to combat the problem.


Wishful thinking, but I hope there was some intent from the beginning to expose the impossibility of this contradictory model to the world, so that a global audience can evaluate how to improve our system to support a better future.


> is by definition poorly aligned

If OpenAI is struggling so hard with the corporate alignment problem, how are they going to tackle the outer and inner alignment problems?


Well, I think that's really the question, isn't it?

Was it a mistake to create OpenAI as a public charity?

Or was it a mistake to operate OpenAI as if it were a startup?

The problem isn't really either one—it's the inherent conflict between the two. IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.


To create a public charity without public fundraising is a no go. Should have been a private foundation because that is where it will end up.


> IMO, the only reason to see creating it as a 501(c)(3) being a mistake is if you think cutting-edge machine learning is inherently going to be targeted by people looking to make a quick buck off of it.

I mean, that's certainly been my experience of it thus far: companies rushing to market with half-baked products that (allegedly) incorporate AI to do some task or another.


I was specifically thinking of people seeing a non-profit doing stuff with ML, and trying to finagle their way in there to turn it into a profit for themselves.

(But yes; what you describe is absolutely happening left and right...)


OpenAI the charity would have survived only as an ego project for Elon doing something fun with minor impact.

Only the current setup is feasible if they want to get the kind of investment required. This can work if the board is pragmatic and has no conflict of interest, so preferably someone with no stake in anything AI either biz or academic.


I think the only way this can end up is to convert to a private foundation and make sizable (8 figures annually) grants to truly independent AI safety (broadly defined) organizations.


> I think it was a real mistake to create OpenAI as a public charity and I would be hesitant to step into that mess.

I think it could have worked either as a non-profit or as a for-profit. It's this weird jackass hybrid thing that's produced most of the conflict, or so it seems to me. Neither fish nor fowl, as the saying goes.


Perhaps creating OpenAI as a charity is what has allowed it to become what it is, whereas other for-profit competitors are worth much less. How else do you get a guy like Elon Musk to 'donate' $100 million to your company?

Lots of ventures cut corners early on that they eventually had to pay for, but cutting the corners was crucial to their initial success and growth


Elon only gave $40 million, but since he was the primary donor I suspect he was the one who was pushing for the "public charity" designation. He and Sam were co-founders. Maybe it was Sam who asked Elon for the money, but there wasn't anyone else involved.


Are there any similar cases of this "non-profit board overseeing a (huge) for-profit company" model? I want to like the concept behind it. Was this inevitable due to the leadership structure of OpenAI, or was it totally preventable had the right people been on the board? I wish I had the historical context to answer that question.


Yes, for example Novo Nordisk is a pharmaceutical company controlled by a nonprofit, worth around $100B.

https://en.wikipedia.org/wiki/Novo_Nordisk_Foundation

There are other similar examples like Ikea.

But those examples are for mature, established companies operating under a nonprofit. OpenAI is different. Not only does it have the for-profit subsidiary, but the for-profit needs to frequently fundraise. It's natural for fundraising to require renegotiations in the board structure, possibly contentious ones. So in retrospect it doesn't seem surprising that this process would become extra contentious with OpenAI's structure.


[flagged]


They are registered as a 501(c)(3) which is what people commonly call a public charity.

> Organizations described in section 501(c)(3) are commonly referred to as charitable organizations. Organizations described in section 501(c)(3), other than testing for public safety organizations, are eligible to receive tax-deductible contributions in accordance with Code section 170.

https://projects.propublica.org/nonprofits/organizations/810...


> They are registered as a 501(c)(3) which is what people commonly call a public charity.

TIL "public charity" is a specific legal term that only some 501(c)(3) organizations qualify as. To do so there are additional restrictions, including around governance and a requirement that a significant amount of funding come from small donors, other charities, or the government. In exchange, a public charity has higher tax-deductible giving limits for donors.


Important to note here that most large individual contributions are made through a DAF or donor-advised fund, which counts as a public source in the support test. This helps donors maximize their tax incentives and prevents the charity from tipping into private foundation status.
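The effect described here shows up in the same back-of-the-envelope arithmetic. Illustrative only: it assumes, as the comment states, that gifts arriving via a DAF's sponsoring organization (itself a public charity) are not subject to the 2% single-donor cap in the support test; the figures are invented.

```python
def support_fraction(capped_gifts, uncapped_gifts):
    """Public support fraction when some gifts (e.g. from DAF sponsoring
    organizations or governmental units) bypass the 2% single-donor cap."""
    total = sum(capped_gifts.values()) + sum(uncapped_gifts.values())
    cap = 0.02 * total
    counted = (sum(min(a, cap) for a in capped_gifts.values())
               + sum(uncapped_gifts.values()))
    return counted / total

# The same $40M gift, made directly vs. routed through a DAF sponsor:
direct = support_fraction({"big_donor": 40_000_000},
                          {"small_public": 2_000_000})        # ~0.07
via_daf = support_fraction({},
                           {"daf_sponsor": 40_000_000,
                            "small_public": 2_000_000})       # 1.0
```

Directly, the $40M donor counts toward public support only up to 2% of the $42M total; through a DAF sponsor, the whole amount counts, which is why the charity doesn't tip into private foundation status.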


"Every section 501(c)(3) organization is classified as either a private foundation or a public charity."

https://www.irs.gov/charities-non-profits/eo-operational-req...


>...aren't even trying to pretend to be...

Suggests GP is not making a legal distinction; it's a description of how they are actually running things.


[deleted because it was wrong]


Their IRS determination letter says they are formed as a public charity and their 990s claim that they have met the "public support" test as a public charity. But there are some questions since over half of their support ($70 million) is identified as "other income" without the required explanation as to the "nature and source" of that income. Would not pass an IRS audit.


> They are registered as a 501(c)(3) which is what people commonly call a public charity.

Why do they do that? Seems ridiculous on the face of it. Nothing about 501(c)(3) entails providing any sort of good or service to society at large. In fact, the very same thing prevents them from competing with for-profit entities at providing any good or service to society at large. The only reason they exist at all is that for-profit companies are terrible at feeding, housing, and protecting their own labor force.


> Nothing about 501(c)(3) entails providing any sort of good or service to society at large.

Sure it does:

https://www.irs.gov/charities-non-profits/charitable-organiz...


> Nothing about 501(c)(3) entails providing any sort of good or service to society at large.

While one might disagree that the particular subcategories into which a 501c3 must fit do, in fact, provide a good or service to society at large, that's the rationale for 501c3 and its categories. It's true that "charity" or "charitable organization" (and "charitable purpose"), the common terms (used even by the IRS), are pedantically incomplete, since the actual purpose requirement in the statute is "organized and operated exclusively for religious, charitable, scientific, testing for public safety, literary, or educational purposes, or to foster national or international amateur sports competition (but only if no part of its activities involve the provision of athletic facilities or equipment), or for the prevention of cruelty to children or animals", but, yeah, it does require something which policymakers have judged to be a good or service that benefits society at large.



