Researchers at MIT developed an algorithm to correct racial biases in facial recognition systems
The program identifies limitations in the database and then generates a new, more balanced sample with which to retrain the artificial intelligence.
Algorithms can replicate racial biases. Dozens of examples show how artificial intelligence can discriminate in favor of or against certain groups. Sometimes this happens because of poor training on a limited database; sometimes it may also be an intentional limitation on the part of those who program the systems.
A week ago, US congresswoman Alexandria Ocasio-Cortez raised the issue of racial bias in facial recognition algorithms and spoke of the need to find a solution.
In line with this idea, MIT researchers presented a system that automatically identifies and removes bias that may exist in a dataset, generating a more balanced sample that better represents diversity, with which the algorithms can be retrained.
The algorithm they developed can learn both a specific task, such as face detection, and the structure underlying the training data, which allows it to identify and minimize any hidden bias.
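The core idea of debiasing via learned data structure is that samples with rare feature combinations get upweighted so the model sees them more often during training. The sketch below is not MIT's implementation, only a minimal illustration of that resampling step, assuming latent codes describing each image have already been extracted (the `latents` array here is synthetic):

```python
import numpy as np

def debias_weights(latents, bins=10):
    """Estimate how common each sample's latent code is, then
    weight rare samples more heavily (inverse frequency)."""
    weights = np.ones(len(latents))
    for dim in range(latents.shape[1]):
        hist, edges = np.histogram(latents[:, dim], bins=bins)
        # Map each sample to its histogram bin along this dimension
        idx = np.clip(np.digitize(latents[:, dim], edges[1:-1]), 0, bins - 1)
        weights *= 1.0 / (hist[idx] + 1)  # +1 avoids division by zero
    return weights / weights.sum()

rng = np.random.default_rng(0)
# Synthetic latent codes: 900 "majority" samples near 0, 100 "minority" near 3
latents = np.concatenate([rng.normal(0, 0.5, (900, 2)),
                          rng.normal(3, 0.5, (100, 2))])
w = debias_weights(latents)
resampled = rng.choice(len(latents), size=1000, p=w)
# The minority group (indices >= 900) is now sampled far more often
print((resampled >= 900).mean())
```

In a real pipeline the latent codes would come from a model trained on the images themselves, so no manual group labels are needed; the resampling simply counteracts whatever imbalance the learned representation reveals.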
The paper notes that, in the tests carried out, the algorithm reduced "categorical bias" by more than 60% compared with the results obtained from commonly used face-detection models.
Training artificial intelligence currently requires some level of human intervention to define the databases and the specific limitations or filters the system is meant to learn.
The algorithm developed by the MIT team can analyze a database, identify the biases or limitations found there, and resample the data to make the result more equitable.
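To see what "resampling to make the result more equitable" means in the simplest case, here is a toy baseline that, unlike MIT's automatic approach, assumes explicit group labels are available: under-represented groups are oversampled until every group appears as often as the largest one.

```python
import numpy as np
from collections import Counter

def rebalance(labels, rng=None):
    """Oversample under-represented groups so every group appears
    as often as the largest one (a simple rebalancing baseline)."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    counts = Counter(labels.tolist())
    target = max(counts.values())
    idx = []
    for group in counts:
        members = np.flatnonzero(labels == group)
        idx.extend(rng.choice(members, size=target, replace=True))
    return np.array(idx)

labels = ["A"] * 800 + ["B"] * 150 + ["C"] * 50   # skewed dataset
balanced = rebalance(labels)
print(Counter(np.asarray(labels)[balanced].tolist()))
# Each group now appears exactly 800 times
```

The advantage of the MIT system over a baseline like this is precisely that it does not need the groups to be labeled in advance: it discovers the imbalance from the structure of the data itself.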