My concern isn't some kind of runaway science-fantasy Skynet or gray goo scenario.
My concern is a far more banal evil: organizations with power and wealth using it to further consolidate their power and wealth, at the expense of others.
You're wrong. This is exactly AI safety, as we can see from the OpenAI charter:
> Broadly distributed benefits
> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Hell, it's the charter's first bullet point!
You can't just define AI safety concerns to be 'the set of scenarios depicted in fairy tales', and then dismiss them as 'well, fairy tales aren't real...'
Sure, but conversely: you can say "ensuring that OpenAI doesn't get to run the universe is AI safety" (right), but not that it's "the main and basically only part of AI safety" (wrong). AI safety spans many threats, and we have to avoid all of them; avoiding just one isn't enough.
Sure. And as I noted at the start of this subthread, I don't exactly think that the OpenAI board is perfectly positioned to navigate this problem.
I just know that it's hard to do much worse than putting this question in the hands of a highly optimized profit-first enterprise.