The Netflix documentary Coded Bias (2020), directed by Shalini Kantayya, examines the racist and sexist biases of the artificial intelligence (AI) behind facial recognition algorithms. The plot could belong to the dystopian series Black Mirror, but it is the subject of heated ethical debate in the real world.
The research that drives the film comes from Joy Buolamwini, a researcher at the Massachusetts Institute of Technology (MIT), and presents the flaws she discovered in the technology.
Coded Bias shows how Buolamwini first noticed the problem with facial recognition: during a project at the MIT Media Lab, she placed her face in front of a screen equipped with an AI device, but was not recognized.
When she put on a white mask, the system detected a face. The researcher then began to realize that AI programs are trained to identify patterns based on a dataset (composed largely of white men) and therefore do not accurately identify female or Black faces.
“The recognition technology was created using a small sample of Black faces and women's faces, which prevents high accuracy rates. It was a skewed choice. However, it is possible to feed data back into the algorithm to reduce the bias and improve classification,” says Simone Diniz Junqueira Barbosa, an expert in human-computer interaction and a professor at PUC-Rio.
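To make the mechanism concrete, here is a minimal Python sketch of both halves of Barbosa's point: a classifier trained on a dataset where one group is rare performs much worse on that group, and a simple reweighting step narrows the gap. The groups, features, and numbers below are synthetic inventions for illustration, not data from the film or from any real facial recognition system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, signal_col):
    """Toy group: 4 random features; the label depends only on `signal_col`."""
    X = rng.normal(size=(n, 4))
    y = (X[:, signal_col] > 0).astype(int)
    return X, y

# Skewed training pool: 95% of samples from group A, 5% from group B,
# and the predictive pattern differs between the two groups.
Xa, ya = make_group(1900, signal_col=0)
Xb, yb = make_group(100, signal_col=1)
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
group = np.array([0] * len(ya) + [1] * len(yb))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

# Plain model: learns the majority group's pattern almost exclusively.
plain = LogisticRegression().fit(X_tr, y_tr)

# Reweighted model: group B samples are up-weighted so the rare group
# carries as much total weight as the majority -- one simple version of
# feeding data back into the algorithm to reduce the bias.
w = np.where(g_tr == 1, (g_tr == 0).sum() / (g_tr == 1).sum(), 1.0)
rebalanced = LogisticRegression().fit(X_tr, y_tr, sample_weight=w)

for name, model in (("plain", plain), ("rebalanced", rebalanced)):
    for g in (0, 1):
        mask = g_te == g
        acc = model.score(X_te[mask], y_te[mask])
        print(f"{name:10s} | group {g} accuracy: {acc:.2f}")
# Typical output: the plain model scores near 1.00 on group 0 but near
# chance on group 1; the rebalanced model scores roughly 0.75 on both.
```

Reweighting is only one possible correction; as the quote further below notes, the harder problem is obtaining representative data in the first place.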
In addition to Buolamwini's study, the documentary features the work of several other researchers and activists who fight against the erratic use of recognition technology. One of them is Silkie Carlo, director of Big Brother Watch, an initiative that monitors the use of facial recognition by police in the UK. The concern worldwide is that recognition technology used for public safety will accuse and arrest suspects based on inaccurate analyses.
AI is already used to determine whether a person is entitled to credit from a bank, whether a suspect should be arrested, and whether a patient should be given priority in hospital care.
“Large companies use recognition to score employees. Law enforcement officers locate fugitive criminals. Doctors can identify diseases through images. With a more balanced mass of data, many biases can be minimized. But obtaining this data is a big challenge in avoiding bias in the algorithm,” Barbosa says.
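Exposing such gaps requires measuring them per group in the first place. The short sketch below shows disaggregated evaluation, in the spirit of Buolamwini's audits of commercial systems: instead of a single aggregate accuracy, error rates are reported separately for each group. The predictions and group labels here are hypothetical.

```python
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Misclassification rate per demographic group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for truth, pred, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        errors[g] += int(truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical labels: overall accuracy is 12/15 = 80%, which sounds
# acceptable until the numbers are broken down by group.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1] + [0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0] + [1, 1, 1, 1, 0]
groups = ["A"] * 10 + ["B"] * 5

print(error_rates_by_group(y_true, y_pred, groups))
# {'A': 0.1, 'B': 0.4} -- group B's error rate is four times group A's,
# a disparity the single aggregate number completely hides.
```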
PUC-Rio has a research group, Ethics and Algorithmic Mediation of Social Processes (EMAPS), dedicated to the subject.
“Multidisciplinary teams are involved in the research. We leave a digital footprint all the time, which makes ethical considerations more urgent and important. Data is valuable because it can be used by companies or governments to manipulate our behavior, and we should avoid these manipulations,” explains Barbosa.
The movie Coded Bias gives examples: in Hong Kong, protesters demonstrate wearing masks to prevent facial recognition, since the technology is used by the Chinese government to arrest suspects; during the demonstrations, groups organize to spray-paint security cameras. In the United States, a condominium in Brooklyn where Black and Latino residents predominate uses facial recognition without the residents' consent.
According to the documentary, the faces of more than 117 million people in the United States appear in facial recognition networks that police can access. On June 25, 2020, influenced by Buolamwini's research, which analyzed data from several technology companies, US lawmakers introduced a bill banning the federal use of facial recognition.
Brazil does not yet have specific regulation for the technology, but the General Data Protection Law (LGPD) touches on the topic and demands greater transparency in the practices adopted by companies.
To learn more about the research presented in Coded Bias:
Book: Weapons of Math Destruction (Algoritmos de Destruição em Massa in the Portuguese edition), by mathematician Cathy O'Neil
Book: Artificial Unintelligence: How Computers Misunderstand the World, by journalist Meredith Broussard
Book: Algorithms of Oppression: How Search Engines Reinforce Racism, by Professor Safiya Umoja Noble of the University of California, Los Angeles