Your comment perfectly justifies never worrying at all about the potential for existential or major risks; after all, one would be wrong most of the time and just engaging in zealotry.
So what do you mean when you say that the "risk is proven"?
If by "the risk is proven" you mean there's more than a 0% chance of an event happening, then there are almost an infinite number of such risks. There is certainly more than a 0% risk of humanity facing severe problems with an unaligned AGI in the future.
If it means the event is certain to happen (100%), then neither a meteorite impact (of a magnitude harmful to humanity) nor the actual use of nuclear weapons falls into this category.
If you're referring only to risks of events that have occurred at least once in the past (as inferred from your examples), then we would be unprepared for any new risks.
In my opinion, it's much more complicated. There is no clear-cut category of "proven risks" that allows us to disregard other dangers and justifiably see those concerned about them as crazy radicals.
We must assess each potential risk individually, estimating both the probability of the event (which in almost all cases will be neither 100% nor 0%) and the potential harm it could cause. Different people naturally come up with different estimates, leading to various priorities in preventing different kinds of risks.
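To make the comparison concrete, a back-of-the-envelope sketch of what I mean might look like this (a minimal Python illustration; every number in it is invented for the sake of the example, not an actual estimate of any real risk):

    # Crude risk triage: expected harm = probability * potential harm.
    # All figures are placeholders, purely to show how different
    # estimates lead to different priorities.
    risks = {
        "large meteorite impact": (0.0001, 10_000_000_000),  # (prob. per century, lives at stake)
        "nuclear exchange":       (0.01,    1_000_000_000),
        "unaligned AGI":          (0.05,   10_000_000_000),  # hotly disputed, of course
    }

    for name, (p, harm) in sorted(risks.items(),
                                  key=lambda kv: kv[1][0] * kv[1][1],
                                  reverse=True):
        print(f"{name}: expected harm ~ {p * harm:,.0f}")

Plug in different numbers and the ranking changes, which is exactly why different people end up with different priorities.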
No, I mean that there is a proven way for the risk to materialise, not just some tall tale. Tall tales might(!) justify some caution, but they are a very different class of issue. Biological risks are perhaps in the latter category.
Also, as we don't know the probabilities, I don't think they are a useful metric. Made-up numbers don't help there.
Edit: I would encourage people to study some classic cold war thinking, because that relied little on probabilities, but rather on trying to avoid situations where stability is lost, leading to nuclear war (a known existential risk).
"there is a proven way for the risk to materialise" - I still don't know what this means. "Proven" how?
Wouldn't your edit apply to any not-impossible risk (i.e., > 0% probability)? For example, "trying to avoid situations where control over AGI is lost, leading to unaligned AGI (a known existential risk)"?
You cannot run away from having to estimate how likely the risk is to happen (in addition to it being "known").
Proven means that all parts needed for the realisation of the risk are known and shown to exist (at least in principle, in a lab, etc.). There can be some middle ground where a large part is known and shown to exist, but not all (biological risks, for example).
No, in relation to my edit, because we have no existing mechanism for the AGI risk to happen. We have hypotheses about what an AGI could or could not do. It could all be incorrect. Playing around with likelihoods that have no basis in reality isn't helping there.
Where we have known and fully understood risks and can actually estimate a probability, we might use that somewhat to guide efforts (though that potentially invites complacency, which is deadly).
Nukes and meteorites have very few components that are hard to predict. One goes bang almost entirely on command and the other follows Newton's laws of motion. Neither actively tries to effect any change in the world, so the risk is only "can we spot the meteorite early enough?". Once we do, it doesn't try to evade us or take another shot at goal. A better example might be covid, which was only mildly more unpredictable than a meteor, changed its code very slowly in a purely random fashion, and came with many historical examples of how to combat it.
Existential risks are usually proven by the subject being extinct, at which point no action can be taken to prevent it.
Reasoning about tiny probabilities of massive (or infinite) cost is hard because the expected value is large, yet simply gambling on the event not happening is almost certain to work out. We should still attempt to incorporate them into decision making, because tiny yearly probabilities are still virtually certain to materialise at larger time scales (e.g. hundreds to thousands of years).
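To make that compounding concrete, here is a minimal sketch (the 0.1% figure is made up purely for illustration):

    # Chance of at least one occurrence over n years, assuming an
    # independent per-year probability p: 1 - (1 - p) ** n
    def at_least_once(p_per_year: float, years: int) -> float:
        return 1 - (1 - p_per_year) ** years

    p = 0.001  # a "tiny" 0.1% yearly probability
    for years in (10, 100, 1000):
        print(years, round(at_least_once(p, years), 3))
    # -> 10 0.01, 100 0.095, 1000 0.632

So a risk we would comfortably ignore in any given year becomes more likely than not over a millennium.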
Are we extinct? No. Could a large impact kill us all? Yes.
Expected value and probability have no place in these discussions. Some risks we know can materialize, for others we have perhaps a story on what could happen. We need to clearly distinguish between where there is a proven mechanism for doom vs where there is not.
>We need to clearly distinguish between where there is a proven mechanism for doom vs where there is not.
How do you prove a mechanism for doom without it already having occurred? The existential risk is completely orthogonal to whether it has already happened, and generally action can only be taken to prevent or mitigate before it happens. Having the foresight to mitigate future problems is a good thing and should be encouraged.
>Expected value and probability have no place in these discussions.
I disagree. Expected value and probability are a framework for decision making in uncertain environments. They certainly have a place in these discussions.
I disagree that there is orthogonality. Have we killed ourselves with nuclear weapons, for example? Anyone can make up any story; at the very least there needs to be a proven mechanism. The precautionary principle is not useful when facing totally hypothetical issues.
People purposefully avoided probabilities in high-risk existential situations in the past. There is only one path of events, and we need to manage that one.
Probability is just one way to express uncertainties in our reasoning. If there's no uncertainty, it's pretty easy to chart a path forward.
OTOH, the precautionary principle is too cautious.
There's a lot of reason to think that AGI could be extremely destabilizing, though, aside from the "Skynet takes over" scenarios. We don't know how much cushion there is in the framework of our civilization to absorb the worst kinds of foreseeable shocks.
This doesn't mean it's time to stop progress, but employing a whole lot of mitigation of risk in how we approach it makes sense.
The simplest is pretty easy to articulate and weigh.
If you can make a $5,000 GPU into something that is like an 80IQ human overall, but with savant-like capabilities in accessing math, databases, and the accumulated knowledge of the internet, and that can work 24/7 without distraction... it will straight-out replace the majority of the knowledge workforce within a couple of years.
The dawn of industrialism and later the information age were extremely disruptive, but they were at least limited by our capacity to make machines or programs for specific tasks and took decades to ramp up. An AGI will not be limited by this; ordinary human instructions will suffice. Uptake will be millions of units per year replacing tens of millions of humans. Workers will not be able to adapt.
Further, most written communication will no longer be written by humans; it'll be "code" between AI agents masquerading as human correspondence, etc. The set of profound negative consequences is enormous; relatively cheap AGI is a fast-traveling shock that we've not seen the likes of before.
For instance, I'm a schoolteacher these days. I'm already watching kids become completely demoralized about writing; as far as they can tell, ChatGPT does it better than they ever could (this is still false, but a 12-year-old can't tell the difference) -- so why bother to learn? If fairly-stupid AI has this effect, what will AGI do?
And this is assuming that the AGI itself stays fairly dumb and doesn't do anything malicious-- deliberately or accidentally. Will bad actors have their capabilities significantly magnified? If it acts with agency against us, that's even worse. If it exponentially grows in capability, what then?
I just don't know what to do with the hypotheticals. It needs the existence of something that does not exist, it needs a certain socio-economic response and so forth.
Are children equally demoralized about addition, or about moving fast, as they are about writing? If not, why not? Is there a way to counter the demoralization?
> It needs the existence of something that does not exist,
Yes, if we're concerned about the potential consequences of releasing AGI, we need to consider the likely outcomes if AGI is released. Ideally we think about this some before AGI shows up in a form that it could be released.
> it needs a certain socio-economic response and so forth.
Absent large interventions, this will happen.
> Are children equally demoralized about addition
Absolutely. Basic arithmetic, etc., has gotten worse. And emerging things like photomath are fairly corrosive, too.
> Is there a way to counter the demoralization?
We're all looking... I make the argument to middle school and high school students that AI is a great piece of leverage for the most skilled workers: they can multiply their effort, if they are a good manager and know what good work product looks like and can fill the gaps; it works somewhat because I'm working with a cohort of students that can believe that they can reach this ("most-skilled") tier of achievement. I also show students what happens when GPT4 tries to "improve" high quality writing.
OTOH, these arguments become much less true if cheap AGI shows up.
As I said in another post: it's some middle ground, because we don't know whether it is possible to an extent that is existential. Parts of the mechanisms are proven, others are not. And we do actually police the risk somewhat along those lines (controls are strongest where the proven part is strongest and most dangerous, with extreme controls around smallpox, for example).