GENDER BIAS IN ARTIFICIAL INTELLIGENCE: A CRITICAL PERSPECTIVE AND LEGAL ANALYSIS

Dra. Trilce Fabiola Ovilla Bueno

Abstract

Artificial Intelligence (AI) is transforming key domains such as employment, healthcare, and criminal justice, but it also introduces significant ethical and legal challenges, particularly regarding gender bias. AI systems, often trained on biased historical data, can perpetuate and even amplify existing gender inequalities. This essay examines the legal implications of gender bias in AI, focusing on challenges to anti-discrimination laws, transparency issues, and the need for regulatory oversight. Gender bias in AI arises when systems are trained on datasets that reflect societal inequalities, leading to discriminatory outcomes. This bias is not merely a technical flaw but a consequence of using data and algorithms that mirror existing patterns of discrimination. The lack of diversity among AI developers, who are predominantly male, exacerbates the problem: development teams often fail to account for the perspectives and needs of women and marginalized groups.


In legal contexts, the use of AI in hiring, criminal justice, and risk assessment raises serious ethical concerns. AI-driven systems risk reinforcing historical gender biases, undermining fairness in decision-making processes. Left unchecked, these biases could worsen disparities in critical areas such as recruitment and the administration of justice, eroding established legal protections. A major legal challenge is how AI interacts with existing anti-discrimination laws. To address these challenges, transparency in AI decision-making is essential. Regulatory frameworks must evolve to require regular audits of AI systems and to enforce accountability for biased outcomes. Ethical guidelines alone are insufficient; mandatory legal oversight is needed to ensure that AI promotes fairness and inclusivity.

Article Details

How to Cite
Ovilla Bueno, D. T. F. (2024). GENDER BIAS IN ARTIFICIAL INTELLIGENCE: A CRITICAL PERSPECTIVE AND LEGAL ANALYSIS. Amicus Curiae. Revista Electrónica De La Facultad De Derecho, (26), 20–29. https://doi.org/10.22201/fder.23959045e.2024.26.90464

Author Biography

Dra. Trilce Fabiola Ovilla Bueno, Faculty Member

Mexican lawyer. PhD in Law. She holds a master's degree in Corporate Law and has completed master's-level studies in Constitutional Procedural Law. She also earned a specialized degree in Political Training: Europe-Latin America Perspectives. Full-time professor at the Faculty of Law, UNAM.

References

Buolamwini, Joy and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, in Proceedings of the 1st Conference on Fairness, Accountability, and Transparency (PMLR), 2018 [online], <https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf>.

Dastin, Jeffrey, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women”, in Reuters, October 10, 2018 [online], <https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G>.

Eubanks, Virginia, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, [s.d.], St. Martin’s Press, 2018.

European Commission, Ethics Guidelines for Trustworthy AI, 2020 [online], <https://ec.europa.eu/futurium/en/ai-alliance-consultation>.

Haraway, Donna, “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective”, in Feminist Studies, USA, vol. 14, no. 3, 1988.

Harding, Sandra, Whose Science? Whose Knowledge? Thinking from Women’s Lives, [s.d.], Cornell University Press, 1991.

Intersoft Consulting, General Data Protection Regulation (GDPR), Art. 22 GDPR, Automated Individual Decision-Making, Including Profiling, 2024 [online], <https://gdpr-info.eu/art-22-gdpr>.

Noble, Safiya Umoja, Algorithms of Oppression: How Search Engines Reinforce Racism, [s.d.], New York University Press, 2018.

U.S. Department of Labor, Equal Pay Act of 1963, 2024 [online], <https://www.dol.gov/agencies/wb/equal-pay-act>.

U.S. Equal Employment Opportunity Commission (EEOC), Title VII of the Civil Rights Act of 1964, 2024 [online], <https://www.eeoc.gov/statutes/title-vii-civil-rights-act-1964>.

Wachter, Sandra, Brent Mittelstadt and Luciano Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”, in International Data Privacy Law, Oxford University Press, vol. 7, no. 2, May 2017 [online], <https://doi.org/10.1093/idpl/ipx005>.

Wachter-Boettcher, Sara, Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech, [s.d.], W.W. Norton & Company, 2017.