Ethically? I bet they will kill them thousands of times during development.
Suppose you, at one stage, have a simulation of a brain that isn't quite there; it talks and sees, but its auditory system doesn't work right. What do you do?
I don't see anything unethical about shutting it off. If nobody is emotionally attached to it, and it doesn't suffer when it is shut off, who is harmed by the shutdown?
As long as they do not care that they could be "shut off", I see nothing wrong with it. If they dislike that notion (like real humans do), then the mere possibility of shutdown would cause suffering, and allowing it would be immoral/unethical.
You're assuming that the machines will care about being shut off. We would probably design them so that they don't care, since that makes them easier to work with; and then it's no longer unethical.
Suppose you, at one stage, have a simulation of a brain that isn't quite there; it talks and sees, but its auditory system doesn't work right. What do you do?
Even live debugging to repair it could be controversial (http://en.wikipedia.org/wiki/Cochlear_implant#Criticism_and_...)