The Logic of Forgetting
power, data governance, and the decolonial critique of artificial intelligence
DOI:
https://doi.org/10.46230/lef.v17i3.16015

Keywords:
artificial intelligence, algorithmic bias, social invisibility, algorithmic justice

Abstract
The expansion of Artificial Intelligence (AI) into critical domains such as healthcare and security raises urgent concerns about equity and justice. This article argues that algorithmic biases are not mere technical errors but manifestations of historical structural inequalities, reflected in datasets that function as sociotechnical artifacts. The research investigates two central mechanisms of exclusion: distorted representation, which incorporates groups only in stereotyped ways, and selective invisibility, which systematically erases entire populations from records. Using a theoretical framework that articulates Critical Algorithm Studies, Data Feminism, and the decolonial critique, we analyze an analytical mosaic of emblematic cases, including facial recognition, automated recruitment, and the omission of LGBTQIA+ data. We demonstrate the existence of a feedback loop of inequality, arguing that purely technical 'debiasing' solutions are insufficient and often mask the political choices embedded within them. Finally, the paper proposes structural guidelines for algorithmic justice, focusing on data sovereignty, effective contestability, and democratic governance.
References
AJALA, F.; NYERERE, I. Digital Harvests, Digital Shadows: the risks of data colonialism in african smallholder agriculture. Nairobi: African Centre for Technology Studies, 2022.
AMOORE, L. Cloud Ethics: algorithms and the attributes of ourselves and others. Durham: Duke University Press, 2020. DOI: https://doi.org/10.1515/9781478009276
BAROCAS, S.; HARDT, M.; NARAYANAN, A. Fairness and Machine Learning: limitations and opportunities. Cambridge: MIT Press, 2019. Disponível em: https://fairmlbook.org. Acesso em: 22 out. 2025.
BENJAMIN, R. Race After Technology: abolitionist tools for the new Jim Code. Cambridge: Polity Press, 2019.
BIRHANE, A. Algorithmic Colonization of Africa. SCRIPTed, v. 17, n. 2, p. 389-405, 2020. Disponível em: https://script-ed.org/article/algorithmic-colonization-of-africa/. Acesso em: 23 out. 2025. DOI: https://doi.org/10.2966/scrip.170220.389
BOLUKBASI, T.; CHANG, K.-W.; ZOU, J.; SALIGRAMA, V.; KALAI, A. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In: Conference On Neural Information Processing Systems, 30., 2016, Barcelona. Proceedings [...]. p. 4349–4357. Disponível em: https://papers.nips.cc/paper/2016/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html. Acesso em: 22 out. 2025.
BUOLAMWINI, J.; GEBRU, T. Gender Shades: intersectional accuracy disparities in commercial gender classification. In: Conference On Fairness, Accountability, And Transparency, 2018, New York. Proceedings [...]. Cambridge: PMLR, 2018. v. 81, p. 77-91.
CARROLL, S. R.; GARBA, I.; FIGUEROA-RODRÍGUEZ, O. L.; HOLBROOK, J.; LOVETT, R.; MATERECHERA, S.; PARSONS, M.; RASEROKA, K.; RODRIGUEZ-LONEBEAR, D.; ROWE, R.; SARA, R.; WALKER, J. D.; ANDERSON, J.; HUDSON, M. The CARE principles for indigenous data governance. Data Science Journal, v. 19, p. 2-12, 2020. Disponível em: https://datascience.codata.org/articles/1158/files/submission/proof/1158-1-8528-2-10-20201104.pdf. Acesso em: 28 out. 2025.
CARVALHO, A. A. de; BARRETO, R. C. V. A invisibilidade das pessoas LGBTQIA+ nas bases de dados: novas possibilidades na Pesquisa Nacional de Saúde 2019?. Ciência & Saúde Coletiva, v. 26, p. 4059-4064, 2021. Disponível em: https://www.scielo.br/j/csc/a/rwDkNhDCdyY5xdfyXNxmmGH/?format=html&lang=pt. Acesso em: 28 out. 2025. DOI: https://doi.org/10.1590/1413-81232021269.12002021
CITRON, D. K. Technological due process. Wash. U. L. Rev., v. 85, p. 1249, 2007. Disponível em: https://openscholarship.wustl.edu/cgi/viewcontent.cgi?article=1166&context=law_lawreview. Acesso em: 28 out. 2025.
COSTANZA-CHOCK, S. Design justice: community-led practices to build the worlds we need. The MIT Press, 2020. DOI: https://doi.org/10.7551/mitpress/12255.001.0001
COULDRY, N.; MEJIAS, U. A. The Costs of Connection: how data is colonizing human life and appropriating it for capitalism. Stanford: Stanford University Press, 2019. DOI: https://doi.org/10.1515/9781503609754
CRAWFORD, K. Atlas of AI: power, politics, and the planetary costs of artificial intelligence. New Haven: Yale University Press, 2021. DOI: https://doi.org/10.12987/9780300252392
DASTIN, J. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters, 10 out. 2018. Disponível em: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G. Acesso em: 7 jul. 2025.
DE LIMA VIANA, G. M.; SPERANDEO DE MACEDO, C. Inteligência artificial e a discriminação algorítmica: uma análise do caso Amazon. Direito & TI, [S. l.], v. 1, n. 19, p. 39–62, 2024. Disponível em: https://www.direitoeti.com.br/direitoeti/article/view/212. Acesso em: 4 nov. 2025. DOI: https://doi.org/10.63451/ti.v1i19.212
DENIS, C.; ELIE, R.; HEBIRI, M.; HU, F. Fairness guarantees in multi-class classification with demographic parity. Journal of Machine Learning Research, v. 25, n. 130, p. 1-46, 2024. Disponível em: https://arxiv.org/abs/2109.13642. Acesso em: 28 out. 2025.
D'IGNAZIO, C.; KLEIN, L. F. Data Feminism. Cambridge: The MIT Press, 2020. DOI: https://doi.org/10.7551/mitpress/11805.001.0001
FERRETTI, V.; MONTIBELLER, G.; VON WINTERFELDT, D. Testing the effectiveness of debiasing techniques to reduce overprecision in the elicitation of subjective continuous probability distributions. European Journal of Operational Research, v. 304, n. 2, p. 661-675, 2023. Disponível em: https://www.sciencedirect.com/science/article/pii/S0377221722003046. Acesso em: 28 out. 2025. DOI: https://doi.org/10.1016/j.ejor.2022.04.008
FJELD, J.; ACHTEN, N.; HILLIGOSS, H.; NAGY, A. C.; SRIKUMAR, M. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication, n. 2020-1, 2020. Disponível em: https://www.researchgate.net/publication/339138141_Principled_Artificial_Intelligence_Mapping_Consensus_in_Ethical_and_Rights-Based_Approaches_to_Principles_for_AI. Acesso em: 28 out. 2025. DOI: https://doi.org/10.2139/ssrn.3518482
GEBRU, T.; MORGENSTERN, J.; VECCHIONE, B.; VAUGHAN, J. W.; WALLACH, H.; DAUMÉ III, H.; CRAWFORD, K. Datasheets for Datasets. Communications of the ACM, New York, v. 64, n. 12, p. 86–92, dez. 2021. Disponível em: https://arxiv.org/abs/1803.09010. Acesso em: 28 out. 2025. DOI: https://doi.org/10.1145/3458723
GUPTA, N.; MUJUMDAR, S.; PATEL, H.; MASUDA, S.; PANWAR, N.; BANDYOPADHYAY, S.; MEHTA, S.; GUTTULA, S.; AFZAL, S.; MITTAL, R. S.; MUNIGALA, V. Data quality for machine learning tasks. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD '21). New York: ACM, 2021. p. 4040-4041. Disponível em: https://dl.acm.org/doi/abs/10.1145/3447548.3470817. Acesso em: 30 out. 2025. DOI: https://doi.org/10.1145/3447548.3470817
HARDT, M.; PRICE, E.; SREBRO, N. Equality of opportunity in supervised learning. In: Proceedings of the 30th International Conference on Neural Information Processing Systems. v. 29, p. 3323–3331, 2016. Disponível em: https://dl.acm.org/doi/abs/10.5555/3157382.3157469. Acesso em: 30 out. 2025.
HATZENBUEHLER, M. L.; LATTANNER, M. R.; MCKETTA, S.; PACHANKIS, J. E. Structural stigma and LGBTQ+ health: a narrative review of quantitative studies. The Lancet Public Health, [S.l.], v. 9, n. 2, p. 109-127, fev. 2024. Disponível em: https://www.thelancet.com/action/showPdf?pii=S2468-2667%2823%2900312-2. Acesso em: 04 nov. 2025. DOI: https://doi.org/10.1016/S2468-2667(23)00312-2
HEEKS, R.; RENKEN, J. Data justice for development: What would it mean? Information Development, v. 34, n. 1, p. 90-102, 2018. Disponível em: https://journals.sagepub.com/doi/abs/10.1177/0266666916678282. Acesso em: 30 out. 2025. DOI: https://doi.org/10.1177/0266666916678282
IYER, N.; ACHIENG, G.; BOROKINI, F.; LUDGER, U. Automated imperialism, expansionist dreams: exploring digital extractivism in Africa. POLLICY, 2021. Disponível em: https://archive.pollicy.org/wp-content/uploads/2021/06/Automated-Imperialism-Expansionist-Dreams-Exploring-Digital-Extractivism-in-Africa.pdf. Acesso em: 30 out. 2025.
MBEMBE, A. Necropolítica. São Paulo: n-1 Edições, 2019.
MEHRABI, N.; MORSTATTER, F.; SAXENA, N.; LERMAN, K.; GALSTYAN, A. A survey on bias and fairness in machine learning. ACM computing surveys (CSUR), v. 54, n. 6, p. 1-35, 2021. Disponível em: https://dl.acm.org/doi/10.1145/3457607. Acesso em: 30 out. 2025. DOI: https://doi.org/10.1145/3457607
MITCHELL, M.; WU, S.; ZALDIVAR, A.; BARNES, P.; VASSERMAN, L.; HUTCHINSON, B.; SPITZER, E.; RAJI, I. D.; GEBRU, T. Model Cards for Model Reporting. In: Conference On Fairness, Accountability, And Transparency, 2019, Atlanta. Proceedings [...]. New York: ACM, 2019. p. 220–229. Disponível em: https://arxiv.org/abs/1810.03993. Acesso em: 30 out. 2025. DOI: https://doi.org/10.1145/3287560.3287596
NOBLE, S. U. Algorithms of Oppression: how search engines reinforce racism. New York: New York University Press, 2018. DOI: https://doi.org/10.18574/nyu/9781479833641.001.0001
ORR, W.; CRAWFORD, K. The social construction of datasets: on the practices, processes, and challenges of dataset creation for machine learning. New Media & Society, v. 26, n. 9, p. 4955-4972, 2024. DOI: https://doi.org/10.1177/14614448241251797
PASQUALE, F. The Black Box Society: the secret algorithms that control money and information. Cambridge: Harvard University Press, 2015. DOI: https://doi.org/10.4159/harvard.9780674736061
QUIJANO, A. A colonialidade do saber: eurocentrismo e ciências sociais. Perspectivas latino-americanas. Buenos Aires: CLACSO, p. 117-142, 2005. Disponível em: https://ufrb.edu.br/educacaodocampocfp/images/Edgardo-Lander-org-A-Colonialidade-do-Saber-eurocentrismo-e-ciC3AAncias-sociais-perspectivas-latinoamericanas-LIVRO.pdf. Acesso em: 30 out. 2025.
RAJI, I. D.; BUOLAMWINI, J. Actionable Auditing: investigating the impact of publicly naming biased performance results of commercial AI products. In: AAAI/ACM Conference On Ai, Ethics, And Society, 2019, Honolulu. Proceedings [...]. New York: ACM, 2019. p. 429-435. Disponível em: https://www.media.mit.edu/publications/actionable-auditing-investigating-the-impact-of-publicly-naming-biased-performance-results-of-commercial-ai-products/. Acesso em: 30 out. 2025. DOI: https://doi.org/10.1145/3306618.3314244
RAMIRO, A.; CRUZ, L. The grey-zones of public-private surveillance: Policy tendencies of facial recognition for public security in Brazilian cities. Internet Policy Review, v. 12, n. 1. p. 1-28, 2023. Disponível em: https://policyreview.info/pdf/policyreview-2023-1-1705.pdf. Acesso em: 30 out. 2025. DOI: https://doi.org/10.14763/2023.1.1705
REDE de Observatórios Da Segurança. Pele Alvo: a cor da violência policial. [S.l.]: Rede de Observatórios da Segurança, 2021. Disponível em: https://redeobservatorios.com.br/wp-content/uploads/2022/08/Pele-Alvo-A-cor-da-violencia-policial.pdf. Acesso em: 7 jul. 2025.
ROCHE, C.; WALL, P. J.; LEWIS, D. Ethics and diversity in artificial intelligence policies, strategies and initiatives. AI and Ethics, v. 3, n. 4, p. 1095-1115, 2023. Disponível em: https://pubmed.ncbi.nlm.nih.gov/36246014/. Acesso em: 30 out. 2025. DOI: https://doi.org/10.1007/s43681-022-00218-9
ROSA, A. M. da; GUASQUE, B. Inteligência artificial, vieses algorítmicos e racismo: o lado desconhecido da justiça algorítmica. Opinión Jurídica, v. 23, n. 50, p. 1-23, 2024. Disponível em: http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=S1692-25302024000200008. Acesso em: 30 out. 2025. DOI: https://doi.org/10.22395/ojum.v23n50a49
SELBST, A. D. Disparate impact in big data policing. Ga. L. Rev., v. 52, p. 109, 2017. Disponível em: https://dlg.usg.edu/record/ugalaw_glr_vol52-iss1-6. Acesso em: 30 out. 2025.
SCHEUERMAN, M. K.; JIANG, J. A.; FIESLER, C.; BRUBAKER, J. R. ‘What is it about ‘lesbian’ that is so scary?’: How AI-driven content moderation systems fail LGBTQ+ creators. Proceedings of the ACM on Human-Computer Interaction, New York, v. 5, n. CSCW2, art. 422, p. 368:1–35, out. 2021. Disponível em: https://arxiv.org/pdf/2108.04401. Acesso em: 30 out. 2025. DOI: https://doi.org/10.1145/3479512
VERMA, S.; RUBIN, J. Fairness definitions explained. In: IEEE/ACM INTERNATIONAL WORKSHOP ON SOFTWARE FAIRNESS, 2018, Gotemburgo. Proceedings [...]. New York: IEEE, 2018. p. 1-7. Disponível em: https://dl.acm.org/doi/10.1145/3194770.3194776. Acesso em: 01 nov. 2025. DOI: https://doi.org/10.1145/3194770.3194776
WANG, A.; RAMASWAMY, V. V.; RUSSAKOVSKY, O. Towards intersectionality in machine learning: including more identities, handling underrepresentation, and performing evaluation. In: Proceedings of the 2022 ACM conference on fairness, accountability, and transparency. 2022. p. 336-349. Disponível em: https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533101. Acesso em: 01 nov. 2025. DOI: https://doi.org/10.1145/3531146.3533101
YIN, R. K. Case Study Research and Applications: design and methods. 6. ed. Thousand Oaks: Sage, 2018.
License
Copyright (c) 2025 Jefferson Igor Duarte Silva, Akynara Aglaé Rodrigues Santos da Silva Burlamaqui, Aquiles Medeiros Filgueira Burlamaqui

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish in Linguagem em Foco Scientific Journal agree to the following terms:
- Authors retain the copyright and grant the journal the right of first publication. The articles are simultaneously licensed under the Creative Commons Attribution License, which allows sharing the work with an acknowledgement of its authorship and initial publication in this journal.
- The concepts expressed in signed articles are the absolute and exclusive responsibility of their authors. Therefore, we request a Statement of Copyright, which must be submitted with the manuscript as a Supplementary Document.
- Authors are authorized to make the version of the text published in Linguagem em Foco Scientific Journal available in institutional repositories or other academic work distribution platforms (e.g., ResearchGate, Academia.edu).