
AI Biases

Technology, and the use of AI in particular, has brought major breakthroughs to today's world; however, it has also opened up a new discussion around ethics. In this post we will look at the challenge of AI bias.

Given the wide range of possible uses and the transformational power of AI, the growing adoption of this technology raises new ethical challenges. Against this backdrop, developing regulation for artificial intelligence has become a priority for many countries.

Although it is not a simple subject, the European Union is starting to sketch out agreements on a regulatory framework for AI. Spain has drafted a first version of a legal framework that encourages companies to invest in and create AI systems under a set of established rules.

Among the regulatory guidelines, they state that a human must always stand behind high-risk AI systems. The level of risk must always be taken into account, especially when a human has to weigh ethical dilemmas and keep bias in check.

We have seen this play out in many experiments that follow the typical AI learning process: the system is fed historical data, and bias becomes a hidden risk, which shows how much work remains.

Now, let's look at some specific examples that illustrate the potential distortions AI can produce.


Racial Bias - PULSE (Photo Upsampling via Latent Space Exploration)

PULSE is an AI-based tool capable of "depixelizing" pictures: you upload a very low-quality, pixelated image and it turns it into a realistic photo.

So far so good. Where's the problem? The tool is very useful, but when increasing the resolution, the algorithm produced faces of white people far more often than faces of other ethnicities, even when the person in the low-resolution original was not white. When PULSE received images of African Americans, Latinos or Asians, it often turned them white.
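Part of the explanation lies in how PULSE works. It does not sharpen the pixels it is given: according to its authors, it searches the latent space of a pretrained face generator (StyleGAN) for a realistic face that, once downscaled, matches the pixelated input. Here is a minimal sketch of that idea, with a toy random generator and a crude random search standing in for StyleGAN and the real optimization:

```python
import numpy as np

# Toy stand-in for a pretrained face generator: in the real PULSE this
# is StyleGAN; here it is just a fixed random linear map so the sketch
# runs self-contained. (Our assumption, purely for illustration.)
rng = np.random.default_rng(0)
W = rng.standard_normal((64 * 64, 16))

def generator(z):
    """Map a 16-dim latent vector to a 64x64 'face'."""
    return (W @ z).reshape(64, 64)

def downscale(img, factor=8):
    """Average-pool the image down to the low-resolution grid."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def pulse_search(lr_image, steps=500, step_size=0.05):
    """PULSE's core idea: find a latent vector whose generated face,
    once downscaled, matches the pixelated input. (The paper uses
    gradient descent on a sphere; we use a crude random search.)"""
    best = rng.standard_normal(16)
    best_loss = np.inf
    for _ in range(steps):
        cand = best + step_size * rng.standard_normal(16)  # local random step
        loss = np.mean((downscale(generator(cand)) - lr_image) ** 2)
        if loss < best_loss:                               # keep improvements
            best, best_loss = cand, loss
    return generator(best)

lr_input = rng.standard_normal((8, 8))   # stand-in for a pixelated photo
hi_res = pulse_search(lr_input)
print(hi_res.shape, "downscale error:",
      round(float(np.mean((downscale(hi_res) - lr_input) ** 2)), 4))
```

Because the output is drawn from the generator's learned distribution of faces, it gravitates toward whatever that distribution over-represents; if the training data is dominated by white faces, so are the reconstructions.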

PULSE became famous not because of its features; in fact, it went viral because of the result it produced from an image of Barack Obama, former president of the United States.

Lucy Liu is another example: in the reconstruction, she no longer has Asian features.



Gender Bias

A few years ago, Amazon tried to automate and accelerate its recruitment process through AI. The system was designed to rate each candidate with a number of stars, allowing recruiters to prioritize profiles.

The project sounded like a great idea. However, the algorithm was trained on ten years of hiring history. The technology industry has traditionally been staffed mostly by men, so the algorithm learned to prioritize men over women. Amazon ultimately scrapped the project, concluding that the hiring process still needs criteria on which a human recruiter cannot be replaced.
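The mechanism behind this is easy to reproduce. In the minimal sketch below (synthetic data and a plain logistic regression; the feature names and numbers are ours, not Amazon's), a model trained on historical decisions that favored men learns a strongly negative weight on a gender proxy, even though skill is distributed identically across groups:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a decade of hiring records (our numbers, not
# Amazon's): skill is distributed identically, but historical decisions
# favored men, so the label correlates with gender, not just skill.
rng = np.random.default_rng(42)
n = 5000
skill = rng.normal(size=n)
is_woman = rng.integers(0, 2, size=n)          # gender proxy, e.g. the
                                               # word "women's" on a resume
hired = skill + 1.5 * (1 - is_woman) + rng.normal(size=n) > 1.0

X = np.column_stack([skill, is_woman])
model = LogisticRegression().fit(X, hired)

print(f"weight on skill:        {model.coef_[0][0]:+.2f}")
print(f"weight on gender proxy: {model.coef_[0][1]:+.2f}")
# The second weight comes out strongly negative: the model has learned
# to down-rank women simply by imitating the biased history.
```

Nothing in the code mentions discrimination; the model simply imitates the history it was shown, which is exactly why historical data is such a subtle source of bias.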



Google Translate

The world's most widely used translator works by machine learning from examples: it uses already-translated texts available online to capture the context of whole sentences, not just individual words.

Google Translate certainly makes life easier for all of us. However, when translating from gender-neutral languages that do not distinguish male and female pronouns, the translator, relying on the data available on the web, tends to stereotype its translations by gender.

For example, Hungarian uses the pronoun "ő" for both he and she, so the translator has to decide on a pronoun based on the most common usage on the web. As a result, it pairs the verb "cook" with the pronoun "she" and associates the engineering profession with the pronoun "he".
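To see how a purely frequency-based choice produces the stereotype, here is a deliberately naive sketch: a toy "web corpus" and a rule that picks whichever pronoun co-occurs with the predicate most often (real systems use far more sophisticated statistical models, but the failure mode is the same):

```python
from collections import Counter

# Toy "web corpus": these counts stand in for the n-gram statistics a
# translator might mine from already-translated text online.
corpus = Counter({
    "she cooks": 3,
    "he cooks": 1,
    "he is an engineer": 2,
    "she is an engineer": 1,
})

def pronoun_for(predicate):
    """Pick 'he' or 'she' for the gender-neutral Hungarian 'ő' by asking
    which pronoun co-occurs with the predicate most often in the corpus."""
    she = sum(c for s, c in corpus.items() if s.startswith("she") and predicate in s)
    he = sum(c for s, c in corpus.items() if s.startswith("he ") and predicate in s)
    return "she" if she >= he else "he"

# The Hungarian phrases are illustrative; both use the same pronoun "ő".
print("ő főz    ->", pronoun_for("cooks"), "cooks")               # -> she cooks
print("ő mérnök ->", pronoun_for("engineer"), "is an engineer")   # -> he is ...
```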



Inequity Bias - Predictive Crime Algorithm

An article recently published on JSTOR.org highlights some of the challenges and risks of applying AI to crime prevention and prediction.

Based on historical data, the algorithm forecasts the locations where crimes are most likely to be committed. Using this information, police allocate more resources to certain areas, usually low-income neighborhoods. This generates bias: a larger police presence in those areas uncovers more crimes, and that data feeds back into the system, reinforcing the stereotype.
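This feedback loop is easy to simulate. In the sketch below, two districts have identical true crime rates, but one starts with slightly more recorded incidents; assuming patrols concentrate on apparent hot spots (modeled here, as our own assumption, as allocation proportional to the square of recorded crime), the small initial imbalance snowballs:

```python
import numpy as np

# Two districts with IDENTICAL true crime rates, but district 0 starts
# with slightly more recorded incidents in the historical data.
true_rate = np.array([1.0, 1.0])     # same real crime everywhere
recorded = np.array([55.0, 45.0])    # biased historical records

for year in range(6):
    # Hot-spot policing: patrols concentrate on districts that look
    # worse in the data (allocation ∝ recorded², an assumption here).
    weights = recorded ** 2
    patrols = weights / weights.sum()
    # Crimes are only uncovered where police are present, so this
    # year's records mirror the patrol allocation, not reality...
    recorded = true_rate * patrols * 100
    print(f"year {year}: district 0 gets {patrols[0]:.0%} of patrols")
# ...and next year's allocation is based on those records: the small
# initial imbalance snowballs until district 0 absorbs nearly all
# patrols, 'confirming' a difference that never existed.
```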

To summarize, in this post we have presented some examples of bias uncovered when algorithms are put to the test; however, there is still a long way to go in both development and regulation.

To quote Audrey Azoulay, Director-General of UNESCO: "The world needs rules for artificial intelligence to benefit humanity. The Recommendation on the Ethics of AI is an important response."

As AI developers, we are not afraid of technological progress. We are aware that these new tools bring great challenges and demand a huge commitment, one in which people will play a leading role: generating value and monitoring the implementation and proper use of AI across its many fields of application.

We will review other relevant ethical issues in AI deployment in future posts. If you have any comments or questions, or would like to suggest a related topic, please do not hesitate to write to consulta@komorebi.ai; we will be happy to hear from you.

