
Caste-Based Discrimination in AI Algorithms: Necessity of Leading Science, Not Just Sharing Narratives!

Sagar Kamble

When there is an absence of well-structured and ethically guided approaches to technology, it is inevitable that such advancements will have adverse effects on society. Artificial intelligence and machine learning, as the epitome of technological progress, have the potential to become tools of power wielded by a select group of elites if not developed on non-discriminatory principles. Just as the fundamental design of capitalism centers on accumulating and monopolizing wealth and resources in the hands of a few, AI may follow a similar path if not approached with fairness and equity in mind.

Racial bias in AI applications has emerged as a major issue, and the biased data used to train AI algorithms is the source of such prejudice.[i] Similar bias risks are likely to occur in India due to the use of biased data in upcoming AI systems. Scholars and experts have spoken out in support of anti-caste tech policies to eradicate caste-based discrimination in new technology.[ii][iii] Reducing and eliminating caste-based discrimination in AI algorithms is needed to ensure fairness, equity, and equal opportunity for Dalits and lower castes, who face systemic oppression based on their caste identity.

The issue of caste discrimination holds particular importance in India, where Dalits and lower castes face marginalization and oppression entrenched in societal structures. As AI algorithms gain popularity and influence across domains, questions have been raised about whether they will perpetuate or worsen these existing prejudices.

Given the global reach and speed of AI technology, it is critical to consider the specific issues Dalits face due to algorithmic prejudice and to work towards effective solutions that mitigate it. It is a decision-relevant subject, with immediate ramifications for how AI systems are developed, implemented, and used. More significantly, Dalits who have experienced caste-based prejudice are the best placed to teach AI developers about the nature of that discrimination.

Sourcing the Caste Bias in Algorithms

Brahmin-savarna domination is prevalent in every sphere of life in India, from culture (psychology) and academia to the economy and politics, and technology is no exception. Powerful groups are more likely to take advantage of AI technology, creating underrepresentation and designing algorithms that benefit themselves. Some prominent politicians in India, descended from the established classes, are already using AI-based tools to capitalize on their existing digital footprints. Although this tactic is not unethical in itself, it portrays AI technology as an enemy, since it impedes social justice.

Even if this is not the intended outcome, AI technology should incorporate algorithmic fairness so that it does not work against the weak and disenfranchised. Uncovering the phenomenon of caste-based discrimination against Dalits, a historically oppressed social group, through research-based, authentic exploration, and analysing it intersectionally against pre-existing patterns, could inform AI developers about the nature of caste bias and how it gets programmed into AI systems. Caste-based biases already affect AI technology because caste-biased data sets are used in AI research. In a democratic political system, technology is not expected to operate as an oppressor. In the Indian context, however, AI can function harmfully: the caste system unfairly makes Dalits and lower castes suffer, a complexity the AI community has struggled to handle. Because this caste prejudice was identified early, marginalized people in the Indian subcontinent are still waiting for AI to be deployed with greater care. The AI community should advocate for a more transparent and inclusive AI ecosystem.
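
To make "algorithmic fairness" concrete, the sketch below shows one common audit: comparing a system's positive-outcome rates across caste groups, a check known as demographic parity. Everything in it, the loan-screening scenario, the group labels, and the four-fifths threshold borrowed from disparate-impact practice, is an illustrative assumption, not a description of any real Indian AI system.

```python
# Minimal sketch of a demographic-parity audit for caste bias.
# The scenario, group labels, and the four-fifths threshold are
# illustrative assumptions, not drawn from any real Indian AI system.
from collections import defaultdict

# Hypothetical outcomes from a loan-screening model: (caste_group, approved).
decisions = [
    ("dominant", True), ("dominant", True), ("dominant", True), ("dominant", False),
    ("dalit", True), ("dalit", False), ("dalit", False), ("dalit", False),
]

def approval_rates(records):
    """Return the positive-outcome (approval) rate for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        positives[group] += int(approved)
    return {group: positives[group] / totals[group] for group in totals}

rates = approval_rates(decisions)
best_rate = max(rates.values())
# Four-fifths rule: flag the system if any group's approval rate falls
# below 80% of the best-served group's rate.
for group, rate in rates.items():
    status = "BIAS FLAG" if rate < 0.8 * best_rate else "ok"
    print(f"{group}: approval rate {rate:.2f} [{status}]")
```

Even a crude check like this makes caste-disaggregated data a prerequisite: an audit cannot flag a disparity it cannot see, which is one reason transparency about training data matters.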

Advancing Ethical AI Governance

The problem of caste bias in AI deserves greater public attention since it directly affects the lives of underprivileged and digitally excluded people, particularly Dalits. To better understand caste-based prejudice and to gather a diverse range of thoughts, viewpoints, and experiences, Dalits need to be purposefully included in such investigations. Participation of the impacted communities can support the development of community-driven solutions, allowing Dalits to actively shape the AI developments that affect their quality of life. By increasing knowledge, empathy, and accountability within the AI community, such an approach can aid the development of more ethical and inclusive AI systems that reduce caste-based prejudice and advance social justice. However, considering the systemic institutionalization of bias, this will require structured efforts in governance.[iv]

While feedback from laboratories, developers, and other stakeholders may yield many breakthroughs, a thorough structure of inclusion, with real authority granted to Dalit representatives, is still required in this process. The first advantage of such inclusion is that it can enhance awareness and knowledge of the problems Dalits have historically faced, as well as of those caused by algorithmic biases in AI. This better comprehension and empathy may make the AI community more committed to removing such stereotypes. Developers benefit from knowing the actual circumstances of underrepresented communities when trying to detect biases in their algorithms and systems. Tackling caste-based discrimination in this way will also make it easier to deal with racial biases in AI. Because they provide a unique and nuanced view of the actual experiences of those encountering prejudice, qualitative accounts are critical to the process of addressing caste bias in AI.[v]

A Qualitative Perspective with Dalit Leadership in Science

We can acquire vital insights into the complexity, the emotions, and the real-world effects of caste-based bias by digging into qualitative data obtained through interviews and narratives. These personal narratives provide context and depth, helping us identify the underlying elements causing caste bias in AI systems and develop more effective strategies to address and correct this societal issue. By researching the narratives and patterns of Dalit discrimination, as well as the impact of algorithmic systems, it is possible to highlight the specific challenges and prejudices present in AI algorithms. Such research can expose the biases ingrained in the data gathered, in the algorithms used, and in the decision-making processes that support caste-based discrimination. On the basis of an understanding of these processes, strategies and recommendations can be devised to reduce and eliminate caste-based bias in AI. Because of institutional inequalities, nepotism, and the overrepresentation of certain demographics (savarnas), AI technology must be adopted cautiously. The dominance of caste-biased communities in the creation of AI technology is likely to perpetuate its socially discriminatory practices, and such a pattern undermines democratic decision-making. Fair and ethical implementation of AI technology is essential to ensure that it improves existing democratic governance.

In line with multimodal approaches such as participatory methods and increased diversity and inclusion in research teams and in participant recruitment and selection, a well-designed qualitative approach recognizes the process of interpretation as well as the subjective aspect of human experience.[vi] This approach has limits. Bahujans (Dalits and lower-caste community members) who speak only local vernacular and rural languages will find it difficult to participate. Participants in qualitative data collection must also be chosen from among individuals willing to share their experiences, which may affect the sample's representativeness and limit the range of perspectives reflected. And the analysis itself may carry bias and subjectivity, since different researchers evaluate participant responses and identify themes and patterns differently. While this social science-based technique addresses an important aspect of AI governance, teaching the AI community about caste-based prejudices and their effects on disadvantaged and oppressed groups is a constitutional requirement. The caste bias in AI algorithms can be erased over time with the adoption of more advanced methodologies, but Bahujans, along with the efforts for social justice, should lead the way in science and technology.
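
One way to keep that coder subjectivity in check is to quantify inter-coder agreement. The sketch below computes Cohen's kappa, a standard chance-corrected agreement statistic, for two hypothetical researchers coding the same interview excerpts; the themes and labels are invented for illustration, not drawn from any real study.

```python
# Minimal sketch: Cohen's kappa for inter-coder agreement on
# qualitative theme labels. The two coders' labels below are
# hypothetical, not data from any real study.
from collections import Counter

# Theme assigned by each researcher to the same ten interview excerpts.
coder_a = ["exclusion", "exclusion", "stigma", "access", "stigma",
           "exclusion", "access", "stigma", "exclusion", "access"]
coder_b = ["exclusion", "stigma", "stigma", "access", "stigma",
           "exclusion", "exclusion", "stigma", "exclusion", "access"]

def cohens_kappa(a, b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    freq_a, freq_b = Counter(a), Counter(b)
    # Chance agreement: probability both coders pick the same theme at random.
    expected = sum(freq_a[t] * freq_b[t] for t in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")
```

A kappa far below 1 would signal that the coding scheme or the coders' training needs revisiting before the themes are used to make claims about caste bias in AI systems.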

[I am thankful to Dr Govind Dhaske for valuable inputs during the preparation of this article, and to Milind Thokal for reviewing it.]

~

Notes

[i] Kostick-Quenet, K. M., Cohen, I. G., Gerke, S., Lo, B., Antaki, J., Movahedi, F., … & Blumenthal-Barby, J. S. (2022). Mitigating racial bias in machine learning. Journal of Law, Medicine & Ethics, 50(1), 92-100.

[ii] Sood, Y. (2022). Addressing Algorithmic Bias in India: Ethical Implications and Pitfalls. Available at SSRN 4466681.

[iii] https://indianexpress.com/article/opinion/columns/a-call-for-algorithmic-justice-for-sc-sts-8607880/

[iv] Barocas, S., & Selbst, A. D. (2021). The moon, the ghetto and artificial intelligence: Reducing systemic racism in computational algorithms. Science, Technology, & Human Values, 46(2), 229-257.

[v] Marda, V. (2018). Artificial intelligence policy in India: a framework for engaging the limits of data-driven decision-making. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180087.

[vi] Aquino, Y. S. J., Carter, S. M., Houssami, N., Braunack-Mayer, A., Win, K. T., Degeling, C., Wang, L., & Rogers, W. A. (2023). Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: A qualitative study of multidisciplinary expert perspectives. Journal of Medical Ethics, 49(3), 190-197.

~~~

Sagar Kamble is a PhD Scholar in Political Theory at the University of Mumbai.
