Why can't these safety advocates just say what they are afraid of? As it currently stands, the only "danger" in ChatGPT is that you can manipulate it into writing something violent or inappropriate. So what? Is this some San Francisco sensibility at work, where reading about fictional violence is equated with actual violence? The more people raise safety concerns in the abstract, the more I ignore them.
I'm familiar with the potential risks of an out-of-control AGI. Can you summarise in one paragraph which of those risks concern you, or the safety advocates, with regard to a product like ChatGPT?
They invented a whole theory about how, if we ever built something called "AGI", it would kill everyone, and now they think LLMs can kill everyone because they're calling them "AGI", even though LLMs work nothing like their theory assumed.
This isn't about political correctness. It's far less reasonable than that.
Based on the downvotes I am getting and the links posted in the other comment, I think you are absolutely right. People are acting as if ChatGPT is AGI, or very close to it, and therefore we have to solve all these catastrophic scenarios right now.
Consider that your argument could also have been used to argue that adopting coal-fired steam engines (in 19th-century UK) was safe: there's no immediate, direct problem, but competitive pressures force everyone to use them, and the externalities that follow become basically unavoidable.