Algorithmic representations of disability
an analysis of images generated by Artificial Intelligence
DOI: https://doi.org/10.46230/lef.v17i3.16013
Keywords: artificial intelligence, disability, algorithmic biases
Abstract
This article investigates how Generative Artificial Intelligence (GAI) systems represent—or omit—people with disabilities, revealing how these technologies reinforce normative patterns and contribute to the symbolic exclusion of divergent corporealities and experiences. The research is grounded in studies on algorithmic bias (Faustino; Lippold, 2023; Noble, 2018; Silva, 2022; Tevissen, 2024) and on algorithmic justice applied to disability (Packin, 2021; Guimarães, 2024), with special attention to the effects of absences and stereotypes in image generation systems. The methodology adopted is Content Analysis, following Bardin (2011), structured into categories based on the biopsychosocial model and on the definitions adopted by the Brazilian National Health Survey (PNS). The corpus consisted of texts and images generated from prompts in the AI tools ChatGPT and Gemini with the aim of representing people with disabilities. The analysis revealed a predominance of invisibility and stigmatization, highlighting the reproduction of ableist imaginaries and the absence of inclusive visual references in training models. The results indicate that such technologies reflect and amplify social biases, calling for an ethical and technical review of how these systems are developed, with the active inclusion of people with disabilities at every stage of the process.
References
BARDIN, L. Análise de conteúdo. São Paulo: Edições 70, 2011.
BARON, I. Eu fui testar o aplicativo do momento para criar minha própria versão na inteligência artificial e, pasmem, encontrei um Ivan nada a ver com o da realidade. É esse o avanço que teremos no futuro? Instagram, 5 jul. 2023. Disponível em: https://eNA74V.short.gy/Cf2n8U. Acesso em: 18 jul. 2025.
BODEN, M. Inteligência Artificial: uma brevíssima introdução. São Paulo: Editora Unesp, 2020.
BRASIL. Decreto nº 6.949, de 25 de agosto de 2009. Promulga a Convenção Internacional sobre os Direitos das Pessoas com Deficiência e seu Protocolo Facultativo, assinados em Nova York, em 30 de março de 2007. Brasília: Casa Civil, [2009]. Disponível em: https://eNA74V.short.gy/eQVc2U. Acesso em: 16 jul. 2025.
BRASIL. Lei nº 13.146, de 6 de julho de 2015. Institui a Lei Brasileira de Inclusão da Pessoa com Deficiência (Estatuto da Pessoa com Deficiência). Brasília: Congresso Nacional, [2015]. Disponível em: https://eNA74V.short.gy/gwkGsk. Acesso em: 16 jul. 2025.
CARVALHO, P. Perfis em redes exploram síndrome de Down por engajamento. DW, 6 jul. 2025. Disponível em: https://eNA74V.short.gy/PV07SC. Acesso em: 8 jul. 2025.
CONVERSION. Pesquisa Inteligência Artificial 2025: dados e insights sobre o impacto da inteligência artificial na vida dos brasileiros. São Paulo, 2025. Disponível em: https://eNA74V.short.gy/AuCzh3. Acesso em: 20 jul. 2025.
COUTURE, S.; TOUPIN, S. What does the notion of “sovereignty” mean when referring to the digital? New Media & Society, v. 21, n. 10, p. 2305–2322, out. 2019. DOI: https://doi.org/10.1177/1461444819865984
FARIAS, N.; BUCHALLA, C. M. A classificação internacional de funcionalidade, incapacidade e saúde da organização mundial da saúde: conceitos, usos e perspectivas. Revista Brasileira de Epidemiologia, [S. l.], v. 8, p. 187–193, jun. 2005. Disponível em: https://eNA74V.short.gy/Q4F6rT. Acesso em: 16 jul. 2025. DOI: https://doi.org/10.1590/S1415-790X2005000200011
FAUSTINO, D.; LIPPOLD, W. Colonialismo digital: por uma crítica hacker-fanoniana. São Paulo: Boitempo, 2023.
GIUSTI, I. Usuários do TikTok criam filtro simulando deficiência como piada. Nós, 16 jun. Disponível em: https://eNA74V.short.gy/4nRwN6. Acesso em: 18 jul. 2025.
GROHMANN, R.; ARAÚJO, W. F. Beyond Mechanical Turk: the work of Brazilians on global AI platforms. In: VERDEGEM, P. (ed.). AI for Everyone? Critical Perspectives. Londres: University of Westminster Press, 2021. p. 247–266. DOI: https://doi.org/10.16997/book55.n
GUIMARÃES, L. R. Inteligência artificial e enviesamento algorítmico: novas formas de discriminação contra pessoas com deficiência. Civilistica.com, Rio de Janeiro, v. 13, n. 2, p. 1–25, 2024. Disponível em: https://eNA74V.short.gy/k0OE3b. Acesso em: 14 jul. 2025.
HUTCHINSON, B.; PRABHAKARAN, V.; DENTON, E.; WEBSTER, K.; ZHONG, Y.; DENUYL, S. Social Biases in NLP Models as Barriers for Persons with Disabilities. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. [S. l.]: Association for Computational Linguistics, 2020. p. 5491–5501. Disponível em: https://aclanthology.org/2020.acl-main.487/. Acesso em: 11 jul. 2025. DOI: https://doi.org/10.18653/v1/2020.acl-main.487
INSTITUTO BRASILEIRO DE GEOGRAFIA E ESTATÍSTICA (IBGE). Pesquisa Nacional de Saúde 2019: ciclos de vida. Brasília: IBGE, 2021. Disponível em: https://eNA74V.short.gy/glJgnF. Acesso em: 16 jul. 2025.
KITCHIN, R.; LAURIAULT, T. Towards critical data studies: Charting and unpacking data assemblages and their work. In: THATCHER, J.; ECKERT, J.; SHEARS, A. (org.). Thinking Big Data in Geography. New Regimes, New Research. Lincoln/London: University of Nebraska Press, 2014. p. 83–94.
MALTA, D. C.; STOPA, S. R.; CANUTO, R.; GOMES, N. L.; MENDES V. L. F.; GOULART, B. N. G. de; MOURA, L. de. Prevalência autorreferida de deficiência no Brasil, segundo a Pesquisa Nacional de Saúde, 2013. Ciência & Saúde Coletiva, [S. l.], v. 21, p. 3253–3264, out. 2016. Disponível em: https://eNA74V.short.gy/LDvWQ4. Acesso em: 16 jul. 2025. DOI: https://doi.org/10.1590/1413-812320152110.17512016
NEWMAN-GRIFFIS, D.; RAUCHBERG, J. S.; ALHARBI, R.; HICKMAN, L.; HOCHHEISER, H. Definition drives design: disability models and mechanisms of bias in AI technologies. First Monday, v. 28, n. 1, 2 jan. 2023. Disponível em: https://firstmonday.org/ojs/index.php/fm/article/view/12903/10796. Acesso em: 11 jul. 2025.
NOBLE, S. U. Algorithms of oppression: How search engines reinforce racism. New York: NYU Press, 2018. DOI: https://doi.org/10.18574/nyu/9781479833641.001.0001
PACKIN, N. G. Disability discrimination using artificial intelligence systems and social scoring: can we disable digital bias? Journal of International and Comparative Law, Hong Kong, v. 8, n. 2, p. 487–512, 2021.
SADEGHIANI, A. Generative AI Carries Non-Democratic Biases and Stereotypes: representation of women, black individuals, age groups, and people with disability in AI-generated images across occupations. arXiv, Cornell University, p. 1–25, 2024. Disponível em: https://arxiv.org/pdf/2409.13869. Acesso em: 11 jul. 2025. DOI: https://doi.org/10.5947/jeod.2025.006
SILVA, T. Racismo Algorítmico: inteligência artificial e discriminações nas redes digitais. São Paulo: Edições Sesc, 2022.
TEVISSEN, Y. Disability Representations: finding biases in automatic image generation. Moments Lab Research. arXiv, Cornell University, 2024. Disponível em: https://arxiv.org/pdf/2406.14993. Acesso em: 11 jul. 2025.
WINQUES, K.; MAGNOLO, T. Como a IA Enxerga Seus Trabalhadores? Um Retrato Enviesado da Precarização dos Anotadores de Dados. Revista Latinoamericana de Ciencias de la Comunicación, [S. l.], v. 23, n. 46, 2024. Disponível em: https://revista.pubalaic.org/index.php/alaic/article/view/1143. Acesso em: 27 jul. 2025. DOI: https://doi.org/10.55738/alaic.v23i46.1143
WORLD HEALTH ORGANIZATION (WHO). World Report on Disability 2011. Geneva: World Health Organization, 2011. (WHO Guidelines Approved by the Guidelines Review Committee). Disponível em: https://eNA74V.short.gy/aJdnbx. Acesso em: 16 jul. 2025.
License
Copyright (c) 2025 Talita Souza Magnolo, Danielle da Silva Garcez Novaes

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish in Linguagem em Foco Scientific Journal agree to the following terms:
- Authors retain the copyright and grant the journal the right of first publication. The articles are simultaneously licensed under the Creative Commons Attribution License, which permits sharing the work with acknowledgement of its authorship and of its initial publication in this journal.
- The concepts expressed in signed articles are the sole and exclusive responsibility of their authors; we therefore request a Statement of Copyright, to be submitted with the manuscript as a Supplementary Document.
- Authors are authorized to make the version of the text published in Linguagem em Foco Scientific Journal available in institutional repositories or on other platforms for distributing academic work (e.g., ResearchGate, Academia.edu).


















