That's the thing though. How does one weigh the good against the bad in these cases?
If you're building stuff at the application layer, maybe the use is obvious, but if you're writing a library or a service, how can you know how it will be used? Should you expend time enumerating and assigning probabilistic weights to all the good and evil that could come from it?
Far simpler proscription: as a human using a tool, don't do evil with the tool.
You do your best, along whatever axes are situationally appropriate.
For what it's worth, I think a sufficiently generic tool tends to balance toward the morally positive, because there is more intent to do good out there than intent to do harm. But of course, helping to grow that disparity is still important, which is why you should be looking for ways in which your tool radically and disproportionately facilitates harm.