This is a valid concern, but we’ve always been very serious about consent and privacy. Our models cannot be used without explicit verbal/visual consent and you hold the keys to your clone.
No snark intended... if you're making it much easier to clone people verbally and visually, why should I be confident that the verbal/visual consent you accept actually came from "me"?
If it doesn't run on my computer, what keys are you talking about? Cryptographic keys? It would be interesting to see an AI agent run on fully homomorphic encryption if the overhead weren't so huge; it would stop cloud companies from holding so much intimate, personal data about all sorts of people.
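For anyone curious what that looks like concretely, here is a toy sketch in plain Python (standard library only) using Paillier encryption. To be clear, Paillier is only additively homomorphic, nowhere near full FHE, and the demo primes below are far too small to be secure, but it shows the core trick: the server computes on numbers it cannot read, and only the key holder can decrypt the result.

    # A minimal sketch of additively homomorphic encryption (Paillier).
    # Toy parameters throughout; nothing here is production-grade.
    import math
    import random

    def keygen(p=1_000_003, q=1_000_033):          # demo primes, NOT secure sizes
        n = p * q
        lam = math.lcm(p - 1, q - 1)               # Carmichael's lambda(n)
        mu = pow(lam, -1, n)                       # simple form, valid for g = n + 1
        return (n,), (n, lam, mu)                  # (public key, private key)

    def encrypt(pub, m):
        (n,) = pub
        r = random.randrange(1, n)                 # fresh randomness per ciphertext
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)  # g^m * r^n mod n^2

    def decrypt(priv, c):
        n, lam, mu = priv
        x = pow(c, lam, n * n)
        return ((x - 1) // n) * mu % n             # L(x) = (x - 1) / n, then * mu

    def add_encrypted(pub, c1, c2):                # runs on the server, no key needed
        (n,) = pub
        return (c1 * c2) % (n * n)                 # E(a) * E(b) decrypts to a + b

    pub, priv = keygen()
    c = add_encrypted(pub, encrypt(pub, 30), encrypt(pub, 12))
    assert decrypt(priv, c) == 42                  # only the key holder learns the sum

Full FHE schemes (BFV, CKKS, TFHE) additionally support multiplication on ciphertexts of arbitrary depth, and that is roughly where the enormous overhead comes from.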
The phrase "you hold the keys to your clone" should probably give anyone pause.
I once worked at a company where the head of security gave a talk to every incoming technical staff member and the gist was, "You can't trust anyone who says they take privacy seriously. You must be paranoid at all times." When you've been around the block enough times, you realize they were right.
You can guarantee you won't be hacked? You can guarantee that if the company becomes massively successful, you won't start selling data to third parties ten years down the road?