Unmasking the Myth of “Neutral” Tech

Pranav Jeevan P

There is a popular myth that technology is neutral, but in reality, technology reinforces the hierarchical and exploitative power relations of the societies that create and use it. Throughout history, technological advancements have been deeply intertwined with acts of genocide, apartheid, and the exploitation of people. From the colonial era, where technological tools facilitated conquest and control, to the industrial age, which saw the exploitation of workers and resources, technology has consistently played a crucial role in maintaining and amplifying social hierarchies. Even today, modern technologies like surveillance systems and data analytics are used to monitor and control marginalized communities, further entrenching existing power imbalances.

Technology is not neutral because it is created, used, and controlled by humans, who have their own biases and goals. Even though technology has led to some improvements in living conditions, it has predominantly been used to consolidate and reinforce existing power structures. This puts it at odds with social justice, which aims to redistribute power to counter past and ongoing oppression of communities. Even as Silicon Valley founders continue to portray technology as a liberatory force that could decentralize power and communication, it has become evident that technology mostly serves to centralize power. Nevertheless, the narrative of technology as a liberatory force remains pervasive, masking the reality of its role in reinforcing existing power inequalities.

Many technologies, especially algorithms and artificial intelligence (AI), can carry algorithmic biases from their creators and from the datasets they were trained on. Algorithmic bias refers to the systematic and unfair discrimination embedded in computer algorithms, often reflecting the prejudices and inequalities present in the data used to create them. This bias disproportionately affects historically marginalized communities by perpetuating and even exacerbating existing prejudices. For instance, facial recognition technology has been shown to have higher error rates for people of color, leading to false identifications and increased surveillance of these communities. Additionally, algorithms used in hiring processes can disadvantage candidates from underrepresented backgrounds if they are trained on biased data that reflects historical hiring inequities. Another example is predictive policing algorithms, which often target minority neighborhoods based on biased crime data, resulting in over-policing and further marginalization of these communities. In 2020, researchers from Stanford and Georgetown published a study revealing significant racial disparities in commercial automated speech recognition (ASR) systems developed by Amazon, Apple, Google, IBM, and Microsoft, particularly in transcribing Black people’s voices. Similarly, a 2023 ACM paper found that large language models (LLMs) are more likely to assign occupations based on gender stereotypes, further demonstrating that biased data can lead to harmful outcomes in AI technologies. These findings underscore the critical need to address biases in AI to prevent perpetuating social inequities.
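To make the mechanism concrete, here is a minimal, hypothetical sketch of how a hiring model trained on biased historical data reproduces that bias. It is not drawn from any of the studies cited above; the group labels, hiring rates, and the scikit-learn model are illustrative assumptions only.

```python
# Hypothetical illustration of algorithmic bias: all numbers and group labels are
# invented for this sketch and do not come from the studies cited in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups of candidates with identical underlying qualification scores.
group = rng.integers(0, 2, size=n)              # 0 = dominant group, 1 = marginalized group
skill = rng.normal(loc=0.0, scale=1.0, size=n)

# Biased "historical" hiring decisions: equally skilled candidates from group 1
# were hired less often in the past.
p_hired = 1 / (1 + np.exp(-(skill - 1.5 * group)))
hired = rng.binomial(1, p_hired)

# A model trained on this history, with group membership available as a feature,
# learns to penalize group 1.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with the same skill score but different group membership
# receive different predicted chances of being hired.
same_skill = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(same_skill)[:, 1])    # group 1 scores visibly lower
```

Even if the group column were dropped, correlated proxy features in real-world data would often carry the same historical bias into the model’s predictions.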

We are currently witnessing an AI-assisted genocide of Palestinians in Gaza by Israel. AI-driven technologies are being used to select and target Palestinians in real time, facilitating acts of violence and oppression. Semi-autonomous robot dogs are deployed for military operations, and drones are being used to shoot unarmed civilians at close range. Since October 7, over 37,900 Palestinians have been killed by Israeli forces, the majority of them women and children, with AI playing a significant role in this violence. These technologies are being used to perpetrate human rights abuses and war crimes, illustrating AI’s potential when controlled by those prioritizing power and conquest over human life and dignity.

Even the term “Artificial Intelligence” lacks coherence and is largely a marketing tool pushed by big tech companies. It is used to describe a wide variety of statistical methods and computational techniques clumped together under a single, alluring label. This branding is designed to sell the concept of AI as possessing intelligence and human-like qualities, thereby anthropomorphizing what are essentially algorithms and data processing tools. By promoting AI as an independent, intelligent entity, these companies effectively deflect accountability, positioning the technology as the decision-maker while distancing themselves from the consequences of its actions. This strategy allows them to evade responsibility for the biases, errors, and ethical concerns arising from AI applications, shifting the blame onto an artificial construct rather than addressing the fundamental issues within their systems and practices. When Air Canada’s AI chatbot gave incorrect information to a traveler, the airline argued that its chatbot was “responsible for its own actions”. Hence, the real product the AI industry sells is the abdication of responsibility: the ability to avoid being accountable to the people who are adversely impacted by this technology.

Technologies are also used for surveillance and control, impacting the privacy and freedom of individuals. Technologies such as the internet, social media, and big data analytics have been used by governments and corporations to surveil and control populations, as seen with mass surveillance programs and data collection practices. This surveillance often targets marginalized groups, reinforcing social hierarchies and power imbalances. The history of anthropometry and fingerprinting in India followed the millennia-old Brahminical codes of caste-based criminality. Fingerprinting, developed by Sir William Herschel, a British colonial officer, was later institutionalized and expanded by others, such as Edward Henry, who developed the Henry Classification System for fingerprints. The implementation of these identification techniques was deeply intertwined with the British colonial administration’s efforts to control and monitor the population. One of the most adverse impacts was on the so-called “criminal tribes,” a group of communities that the British colonial government categorized as inherently criminal under the Criminal Tribes Act of 1871. This legislation allowed the government to register, surveil, and restrict the movement of these communities, effectively criminalizing their existence based on their ethnicity and traditional lifestyles. As a result, these identification practices contributed to a legacy of stigma and discrimination that persists even today, decades after the colonial period.

The field of computer science, both in academia and industry, often hides behind a veneer of neutrality, prioritizing quantitative knowledge and technical prowess while dismissing the importance of qualitative insights and diverse lived experiences. This narrow focus enables the field to evade accountability for its broader social implications, perpetuating exploitative power relations and reinforcing existing hierarchies. This approach helps maintain the status quo of who has access to and control over technology. For example, the underrepresentation of marginalized castes, Adivasis, women, and minorities in tech leads to a lack of diverse perspectives in designing and implementing technological solutions, which can result in biased and inequitable outcomes. Data collected by APPSC IIT Bombay shows that of the 26 PhD candidates selected in the computer science department of IIT Bombay in 2023, 21 (81%) were savarnas, with just 3 SC (11%), 1 ST (4%), and 1 OBC (4%). IIT Delhi has 37 faculty members in its computer science department, of whom 36 (97%) are savarnas. A similar trend of gatekeeping the field of computing from Dalit Bahujan Adivasis (DBA) by violating reservation norms can be observed in higher education institutions across the country.

The attempts by the Indian government to introduce digital interventions in MNREGA, including biometric-based payment, attendance and worker verification systems, and worksite surveillance, have had an adverse impact on rural workers and their constitutionally guaranteed rights to equality, empowerment, fair wages, dignity, and privacy. These measures have led to large-scale disempowerment and exclusion of workers from MNREGA. The root of these issues lies in the policymakers’ failure to consult with MNREGA workers on how to effectively use digital tools to help them realize their rights under the MNREGA Act. The situation could have been significantly different if there had been public consultations and stakeholder participation at every level of implementing digital interventions. The top-down techno-solutionism thrust into the system without any deliberation with workers has resulted in a disaster.

Those who create these policies and build these digital interventions often have no understanding of the lived realities of the MNREGA workers. The lack of participation from the affected people in the process of building, analyzing, and controlling these technologies is a critical gap. Without their involvement, it is nearly impossible to implement these technologies successfully and ensure they truly benefit those they are meant to help. Public consultations and stakeholder participation are essential for aligning technological solutions with the needs and rights of the workers.

Forcefully imposing digital solutions or technological interventions, particularly in the realm of social security or welfare delivery, without addressing the digital divide and low internet literacy in a country like India, has led to the exclusion and violation of fundamental rights for many individuals. This policy approach neglects the realities of those who lack access to digital resources or the skills to use them, thereby exacerbating existing inequalities and disenfranchising vulnerable populations. By not considering these critical factors, such interventions can inadvertently harm the very people they are intended to help.

Industry’s relentless pursuit of profit over social responsibility greatly exacerbates these issues. Many technologists wrongly believe that their efforts are improving the world. However, the reality is that much of today’s technology is controlled by a few major companies owned by a small number of super-wealthy individuals. These tech giants prioritize profit over social good, often benefiting themselves at the expense of others. Their dominance allows them to exploit data, manipulate markets, and engage in practices that harm consumers, workers, and communities. This concentration of power not only stifles innovation but also perpetuates oppression and dismantles democracy.

The harmful applications of technology in military and industrial contexts are not merely unintended consequences of the digital revolution; they are central to the objectives of this technological advancement. Technology developed within oppressive structures will lean only towards violence and power concentration rather than promoting equality and liberation.

Within the tech industry itself, there is a clear perpetuation of caste and apartheid, evident in the stark separation between contract and gig workers, who come mostly from underprivileged communities, and the full-time employees who build these algorithms, who come mostly from privileged savarna backgrounds. This creation of a privileged class of tech workers, distanced from other workers and communities, hinders the formation of solidarity among workers, preventing a unified stand against practices that violate human rights and the dignity of labor. This privileged class serves to protect the interests of those in power, ensuring that technological advancements continue to benefit a select few while exploiting and marginalizing the majority. This structural violence embedded within the development and application of technology underscores the need for a critical reassessment of how technology is created and used in society.

In a country where more than 54% of the 1 million delivery workers belong to either SC/ST, the imposition of a vegetarian fleet by companies like Zomato once again underscores the neglect of deep-rooted social inequalities by technologists. This decision not only reinforces Brahminical notions of food purity but also marginalizes the communities that do not adhere to these dietary practices. Such policies highlight the disconnect between corporate decisions and the lived realities of the workers, perpetuating existing social divides and biases.

The larger threat posed by AI is not the creation of a super-intelligent being that might rule over humanity. Instead, the real immediate danger lies in how these technologies can reinforce existing power structures, making it even more difficult to democratize society and dismantle tyranny. The focus of critiques should therefore be on how these technologies are used to entrench inequality and control.

There is a fanatical belief in the notion that technology is the sole savior of humanity, capable of solving all our problems. This techno-utopianism blinds us to the reality that technology, especially as it is currently developed and deployed, exacerbates existing inequalities, and creates new forms of exploitation. While technology has the potential to bring about positive change, viewing it as the only means to save humanity ignores the need for ethical considerations, social justice, and the redistribution of power. It is crucial to critically assess who controls technological advancements and to ensure that they are used to benefit all of humanity, not just a privileged few.

To address these issues, efforts must be directed toward democratizing the tech industry, ensuring that people from marginalized communities have greater access to the processes by which algorithms are created, controlled, and deployed. Additionally, there must be concerted efforts to dismantle the oligarchy of tech companies and the influence of a few mega-billionaires who currently dominate the development and use of these technologies.

The framing of technology as neutral and objective is not just misleading; it is dangerous. It allows institutions to escape scrutiny and reinforces the power of those already in control. Addressing these deep-rooted issues requires a fundamental shift towards valuing qualitative experiences, ethical considerations, and genuine diversity in the development and application of technological solutions.

~~~

Pranav Jeevan P is a PhD candidate in Artificial Intelligence at IIT Bombay. He earlier studied quantum computing at IIT Madras and robotics at IIT Kanpur.
