Microsoft said it refused to provide the tech because the training data produced biased results against minorities.
How difficult is it to fix this bias? For example, the system could be configured to report a match only when the confidence score exceeds a threshold, and the threshold could then be raised on those subsets of faces where training data is lacking (rough sketch below). Would that work?
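A minimal sketch of what I mean, in Python. Everything here is hypothetical: `match_fn` stands in for a face matcher returning (identity, confidence), `group_fn` for a demographic-group estimator, and the threshold values are made up, not any vendor's actual API or numbers.

    from typing import Callable, Optional, Tuple

    # Hypothetical per-group thresholds: stricter where training data
    # (and thus accuracy) is weaker, trading missed matches for fewer
    # false positives. Values are illustrative only.
    GROUP_THRESHOLDS = {
        "well_represented": 0.90,
        "under_represented": 0.99,
    }
    DEFAULT_THRESHOLD = 0.99  # fail safe when the group is unknown

    def guarded_match(
        image: bytes,
        match_fn: Callable[[bytes], Tuple[str, float]],
        group_fn: Callable[[bytes], str],
    ) -> Optional[str]:
        identity, confidence = match_fn(image)
        group = group_fn(image)
        if confidence >= GROUP_THRESHOLDS.get(group, DEFAULT_THRESHOLD):
            return identity
        return None  # decline to match rather than risk a false positive

The design choice is that the system abstains instead of guessing: a higher threshold on underrepresented groups converts would-be false positives into non-matches rather than improving the model's actual accuracy on those faces.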
Also, why not build more diverse training data if this is a pervasive problem? It isn't free, but neither is it cost-prohibitive for a company like Microsoft.