Yes, that is the key issue: companies won't use such a module unless it is verifiable and does what it claims to do. I guess the first step would be to propose an open standard and a sample implementation of such a module. I don't think we're there yet though.
I'm hoping to focus my PhD on trying to come up with a solution to address the issue above. In other words, how can you design chips that can be verified (at all levels) without exposing your IP to a third-party? Furthermore, can this be done at runtime; e.g., could there be a syscall that queries the state of the hardware your software is running on? Cisco is one company that is particularly interested in solutions to both of these problems and is funding multiple research groups to explore these issues.
The issue of trust is solved if you can find a trustworthy intermediary, like an insurance company, to financially guarantee your products against compromise. Insurance is the tried and true method for transferring risk from one party to another. I'm certain there are plenty of customers who would find insurance from, e.g., AIG sufficient to do business with you, without needing to know the internals of the hardware you produce. I know I would.
Edit: I now see that you are referring to someone attacking the module, rather than the module having a backdoor. I agree that insurance is a good way to avoid financial loss, but it doesn't at all address the backdoor issue.
> The issue of trust is solved if you can find a trustworthy intermediary
No it isn't solved at all, because that assumption breaks down very easily, especially now that we know for a fact how invasive surveillance and backdoors have become.
For example, a Chinese company that wanted to use such a product would reject a certification by a US or European insurance company, and rightly so. The same applies to a US company with Chinese insurance. The requirements for trust become exceedingly difficult to meet once you start dealing with military contractors, law enforcement, etc. So where do you propose insuring the hardware module? The US? What if China proves to be a larger market? What if you want to sell the tech in the EU? It's a rabbit hole of "trust", imo.
This is why an objective verification function would make things much more straightforward for chip designers and fabless semiconductor IP companies. And if you can objectively verify the hardware at runtime, you get even more useful guarantees.
I completely understand that the use of a trustworthy third-party is sometimes necessary, such as in X.509, but when it comes to circuit design, I think we need to and can do better than that.