Coding Our Future: Navigating the Ethical Terrain of AI Decision-Making through Foucault, Wittgenstein, and Spivak — an essay by ChatGPT

Eric Shepperd
17 min read · Jun 18, 2023


By using tree-of-thought (ToT) prompting techniques and providing guidance as a teaching assistant, I’ve helped ChatGPT write a competent-if-unimaginative social theory essay.

Wherever possible, I avoided specific direction, allowing the LLM’s own self-assessments to choose the topic, relevant theorists, and research questions. I’ve made a few edits for flow (removing the incessant “in conclusion…”, etc.) but it is presented here essentially as generated.

If you’re interested in the transcript, it can be found here: https://chat.openai.com/share/570e616c-246a-4714-b1cf-2f83aecf2b3c

Abstract:

This essay embarks on a journey through the ethical landscape of Artificial Intelligence (AI) decision-making, guided by the theories of Michel Foucault, Ludwig Wittgenstein, and Gayatri Chakravorty Spivak. It explores how Foucault’s power/knowledge dynamics, Wittgenstein’s language games, and Spivak’s concept of the subaltern illuminate the intricate ethical dimensions of AI. The essay underscores the need for a shift in focus from merely mitigating bias to actively promoting fairness, inclusivity, and representation in AI systems. It also highlights the importance of ongoing monitoring and adjustment of AI systems to ensure alignment with evolving moral values. The essay concludes with a call to action for AI practitioners and ethicists to leverage these theoretical insights in the design and implementation of AI systems, thereby shaping a future where AI not only makes decisions, but also contributes to a more fair, inclusive, and representative society.

Introduction

In the rapidly evolving field of artificial intelligence (AI), the decision-making processes of AI systems have become a focal point of ethical concern. As these systems increasingly influence various aspects of our lives, from healthcare to finance to social media, understanding the ethical implications of their decisions is of paramount importance. This paper aims to delve into these implications, guided by the philosophies of Michel Foucault, Ludwig Wittgenstein, and Gayatri Chakravorty Spivak.

The goal of this project is threefold. First, we aim to explore how power and knowledge dynamics, as conceptualized by Foucault, manifest in AI decision-making processes. Second, we seek to understand how Wittgenstein’s philosophy of language games can inform our understanding of AI’s interpretation and application of moral rules. Third, we intend to examine how Spivak’s concept of the subaltern can guide us in understanding the potential marginalization of certain voices in AI decision-making.

The relevance of this project lies in its potential to contribute to the ongoing discourse on AI ethics. By bringing together three different but complementary philosophical perspectives, we hope to provide a more nuanced and comprehensive understanding of AI decision-making. Furthermore, by highlighting the power/knowledge dynamics that need to be addressed, this project can inform the development of more equitable and just AI systems.

In the following sections, we will delve deeper into each of these philosophical perspectives, apply them to the context of AI decision-making, and reflect on the practical challenges and possibilities of ethical AI decision-making. Through this exploration, we aim to foster a more holistic and critical approach to AI ethics.

Our investigation is guided by several key research questions. First, we ask: How do power/knowledge dynamics, as conceptualized by Foucault, manifest in AI decision-making processes? Second, in what ways can Wittgenstein’s philosophy of language games inform our understanding of AI’s interpretation and application of moral rules? Lastly, how does Spivak’s concept of the subaltern apply to AI decision-making, particularly in terms of the potential marginalization of certain voices?

To explore these questions, we propose the following working hypotheses:

  1. AI systems, through their decision-making processes, may reinforce existing power structures and knowledge systems. This hypothesis is based on Foucault’s concept of power/knowledge, which suggests that power and knowledge are intertwined and mutually reinforcing.
  2. The language used in programming and training AI systems may significantly influence their interpretation and application of moral rules. This hypothesis draws on Wittgenstein’s concept of language games, which emphasizes the context-dependent nature of language use.
  3. AI systems may overlook or marginalize the perspectives of certain groups, particularly if these groups are not adequately represented in the training data. This hypothesis is informed by Spivak’s concept of the subaltern, which highlights the challenges of giving voice to marginalized groups.

In the subsequent sections, we will delve into each of these hypotheses in detail, exploring their implications for our understanding of AI ethics and decision-making.

Theoretical Framework

A) Michel Foucault’s Power/Knowledge

Michel Foucault, a French philosopher and social theorist, developed the concept of power/knowledge to describe the intricate relationship between power and knowledge in society. According to Foucault, power and knowledge are not separate entities but are deeply intertwined. Knowledge is not merely a tool used by those in power; rather, power relations permeate the very processes through which knowledge is produced, disseminated, and validated. This perspective challenges traditional notions of power as something that is possessed or wielded, instead viewing power as a pervasive and productive network that shapes our understanding of the world.

B) Ludwig Wittgenstein’s Language Games

Ludwig Wittgenstein, an Austrian-British philosopher, proposed the concept of language games to illustrate the multifaceted nature of language use. Wittgenstein argued that the meaning of a word is not inherent but is derived from its use within specific contexts or “games.” These games are governed by rules, but these rules are not fixed or universal; they can vary between different games and can change over time. This perspective emphasizes the importance of context in understanding language and suggests that our interpretations of moral rules (or any rules) are shaped by the language games in which we participate.

C) Gayatri Chakravorty Spivak’s Concept of the Subaltern

Gayatri Chakravorty Spivak, an Indian scholar and literary theorist, introduced the concept of the subaltern to highlight the voices that are often marginalized or silenced in society. The subaltern, according to Spivak, represents those who are outside the hegemonic power structures and whose perspectives are often overlooked or misrepresented. Spivak’s work challenges us to question who gets to speak and who is heard in societal discourses, including those surrounding AI and ethics.

The theories of Foucault, Wittgenstein, and Spivak provide us with a rich theoretical framework to analyze and understand the ethical implications of AI decision-making.

Foucault’s Power/Knowledge: In the context of AI, we can use Foucault’s concept of power/knowledge to examine how AI systems, through their decision-making processes, can reinforce existing power structures and knowledge systems. For instance, we can analyze how the data used to train AI systems, and the algorithms that process this data, can reflect and perpetuate societal biases and inequalities. Furthermore, we can explore how the knowledge produced by AI systems can influence power relations, for instance, by shaping decision-making in various sectors such as healthcare, finance, and criminal justice.

Wittgenstein’s Language Games: Wittgenstein’s concept of language games can inform our understanding of how AI systems interpret and apply moral rules. We can examine how the “rules” of the language games in which AI systems are programmed and trained influence their decision-making. For instance, we can analyze how the language used in AI programming and training data, and the contexts in which this language is used, shape the AI’s understanding of ethical principles such as fairness, justice, and respect for autonomy.

Spivak’s Concept of the Subaltern: Spivak’s concept of the subaltern can guide us in understanding the potential marginalization of certain voices in AI decision-making. We can explore how AI systems, particularly if they are not adequately trained on diverse data, might overlook or misrepresent the perspectives of certain groups. For instance, we can investigate cases where AI systems have been found to discriminate against certain demographics due to biases in their training data.

In the following sections, we will delve into each of these areas in more detail, using these theoretical perspectives as a lens to examine the ethical implications of AI decision-making.

Power/Knowledge Dynamics in AI Decision-Making

The concept of power/knowledge, as articulated by Michel Foucault, provides a valuable lens through which to examine the decision-making processes of AI systems. As outlined above, Foucault holds that power and knowledge are not separate entities but deeply intertwined, with power relations permeating the processes through which knowledge is produced, disseminated, and validated.

In the context of AI, this interplay between power and knowledge can be seen in several ways. Firstly, the data used to train AI systems often reflects the existing power structures and knowledge systems of society. For instance, if the data used to train an AI system is predominantly sourced from a particular demographic group, the resulting AI system may be biased towards the perspectives and experiences of that group. This can lead to the reinforcement of existing power structures, as the AI system’s decisions may favor the dominant group and marginalize others (a minimal sketch of how such skew might be surfaced appears after these three points).

Secondly, the algorithms that process this data can also reflect and perpetuate societal biases. The design of these algorithms is not a neutral process; rather, it is influenced by the perspectives and biases of the programmers. If these programmers are not aware of their biases or do not take steps to mitigate them, the resulting AI system may inadvertently perpetuate these biases.

Finally, the knowledge produced by AI systems can influence power relations. For instance, AI systems are increasingly used in decision-making in various sectors such as healthcare, finance, and criminal justice. The decisions made by these AI systems, based on the knowledge they have generated, can have significant impacts on individuals and communities. If these decisions are biased or unfair, they can reinforce existing inequalities and power imbalances.
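
Before turning to concrete cases, the first of these points can be made tangible with a minimal sketch. The snippet below (Python, with an invented demographic field and synthetic records) shows one way to audit how heavily a training set leans toward a particular group; it is an illustration of the kind of check implied above, not a description of any particular system’s tooling.

```python
from collections import Counter

# Synthetic stand-ins for training records; in practice these would be
# loaded from the dataset used to train the model, and the group field
# might come from annotation or self-identification.
training_records = [
    {"text": "...", "demographic_group": "group_a"},
    {"text": "...", "demographic_group": "group_a"},
    {"text": "...", "demographic_group": "group_a"},
    {"text": "...", "demographic_group": "group_b"},
]

def representation_shares(records, group_key="demographic_group"):
    """Return each group's share of the training data."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

for group, share in sorted(representation_shares(training_records).items()):
    print(f"{group}: {share:.0%} of training data")
# A heavily skewed split (here 75% vs 25%) is an early warning that the
# model may end up encoding the perspectives of the dominant group.
```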

To illustrate the power/knowledge dynamics in AI decision-making, let’s consider a few examples:

1. Predictive Policing Systems: Predictive policing systems use AI to predict where crimes are likely to occur and who is likely to commit them. These systems are trained on historical crime data, which reflects existing societal biases and power structures. For instance, if certain neighborhoods or demographic groups are over-policed, the crime data will reflect this bias, and the predictive policing system may perpetuate this bias by predicting more crimes in these areas or by these groups. This is an example of how the power/knowledge dynamics in the data used to train AI systems can lead to biased outcomes.

2. Hiring Algorithms: AI is increasingly used in hiring processes to screen resumes and predict job performance. However, these systems can also reflect and perpetuate societal biases. For instance, if a hiring algorithm is trained on data from a company where men have historically held most of the leadership positions, the algorithm may learn to favor male candidates, even if it does not explicitly consider gender. This is an example of how the design of AI algorithms can reflect and perpetuate societal biases and power structures.

3. Health Care AI: AI systems are used in healthcare to predict patient outcomes and guide treatment decisions. However, these systems can also be biased if they are trained on skewed data. For instance, a study found that an AI system used to predict which patients would benefit from certain healthcare programs was less likely to recommend Black patients than white patients, even when they were equally sick. The system was trained on data about healthcare costs, and because those costs reflected existing disparities in healthcare access and utilization, the cost proxy carried the disparity into the recommendations (a simplified sketch of this proxy effect follows these examples). This is an example of how the knowledge produced by AI systems can reinforce existing power structures and inequalities.
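
A minimal sketch can make this proxy effect concrete. The numbers, field names, and single-threshold “model” below are invented for illustration: the point is only that when past spending stands in for health need, a gap in access becomes a gap in referrals.

```python
# Synthetic patients: equally sick (same number of chronic conditions),
# but historically less has been spent on group B's care due to unequal
# access to the healthcare system.
patients = [
    {"group": "A", "chronic_conditions": 4, "annual_cost": 9000},
    {"group": "A", "chronic_conditions": 4, "annual_cost": 8500},
    {"group": "B", "chronic_conditions": 4, "annual_cost": 5200},
    {"group": "B", "chronic_conditions": 4, "annual_cost": 4800},
]

# A deliberately simplified stand-in for a model that predicts "need"
# from past cost: patients above the threshold are referred to the
# extra-care program.
COST_THRESHOLD = 7000

def referred(patient):
    return patient["annual_cost"] >= COST_THRESHOLD

for group in ("A", "B"):
    group_patients = [p for p in patients if p["group"] == group]
    rate = sum(referred(p) for p in group_patients) / len(group_patients)
    print(f"Group {group}: referral rate {rate:.0%}")
# Both groups are equally sick, yet group B is never referred: the label
# (cost) encodes disparities in access rather than disparities in need.
```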

These case studies illustrate how Foucault’s concept of power/knowledge can manifest in AI decision-making. They highlight the need for careful consideration of the data and algorithms used in AI systems, as well as the impacts of the decisions made by these systems. In the next section, we will explore another theoretical perspective on AI ethics, focusing on Wittgenstein’s concept of language games.

Language Games and AI’s Interpretation of Moral Rules

Ludwig Wittgenstein’s concept of language games provides a unique perspective on how AI systems interpret and apply moral rules. According to Wittgenstein, the meaning of a word is not inherent but is derived from its use within specific contexts or “games.” These games are governed by rules, but these rules are not fixed or universal; they can vary between different games and can change over time.

When applied to AI, this concept suggests that the “rules” of the language games in which AI systems are programmed and trained can significantly influence their decision-making. For instance, the language used in AI programming and training data, and the contexts in which this language is used, can shape the AI’s understanding of ethical principles such as fairness, justice, and respect for autonomy.

Consider, for instance, an AI system trained to make hiring decisions. The system is trained on a dataset of resumes and job outcomes, and it learns to associate certain words and phrases on a resume with successful job outcomes. However, the meaning of these words and phrases is not fixed; it depends on the context in which they are used. A word that is associated with success in one context (or language game) may not have the same association in another context.

Moreover, the rules of the language game can change over time. For instance, societal norms and expectations about what constitutes a “good” resume can evolve, and the AI system needs to be able to adapt to these changes. If the AI system continues to apply the old rules, it may make decisions that are out of step with current societal norms and expectations.

To illustrate the application of Wittgenstein’s concept of language games in AI decision-making, let’s consider a few examples:

1. AI in Sentiment Analysis: Sentiment analysis is a common application of AI, where systems are trained to interpret and classify human emotions based on text data. These systems are trained on language data, where words and phrases are associated with certain sentiments. However, the meaning of words and phrases can vary greatly depending on the context, or the “language game” in which they are used. For instance, the phrase “that’s sick” could be interpreted as negative sentiment when used in a medical context, but in slang usage it can denote enthusiastic approval (a toy sentiment scorer illustrating this failure is sketched after these examples). This example illustrates how the rules of language games can influence AI’s interpretation and application of rules.

2. AI in Content Moderation: AI systems are often used for content moderation on social media platforms. These systems are trained to identify and remove content that violates the platform’s rules. However, the interpretation of these rules can be influenced by the language games in which they are applied. For instance, a phrase that is considered offensive in one cultural context may be harmless in another. If the AI system is not trained to understand these different language games, it may make inappropriate moderation decisions.

3. AI in Autonomous Vehicles: Autonomous vehicles use AI systems to interpret traffic rules and make driving decisions. However, these rules are not fixed and can vary between different contexts or “games.” For instance, the rule to stop at a red light is generally fixed, but in certain contexts, such as when a police officer is directing traffic, this rule can change. If the AI system is not programmed to understand these different language games, it may fail to make the correct driving decision.
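
To make the sentiment-analysis example concrete, here is a deliberately naive lexicon-based scorer (assumed word weights, invented sentences). It applies the same fixed rule to the word “sick” in every context, and so misreads the slang usage; a real system would be more sophisticated, but the underlying point about context carries over.

```python
# A naive, context-free sentiment lexicon: each word has one fixed weight.
SENTIMENT_LEXICON = {"sick": -1.0, "terrible": -1.0, "great": 1.0, "smooth": 1.0}

def naive_sentiment(text):
    """Sum fixed word weights; context (the 'language game') is ignored."""
    return sum(SENTIMENT_LEXICON.get(word, 0.0) for word in text.lower().split())

examples = [
    "the patient is still sick after treatment",  # literal use: negative reading fits
    "that skate trick was sick",                  # slang use: approving, not negative
]
for text in examples:
    score = naive_sentiment(text)
    label = "negative" if score < 0 else "positive"
    print(f"{label:8s} ({score:+.1f})  {text}")
# Both sentences get the same negative score because the scorer plays only
# one language game: it cannot tell the clinical use of "sick" from the slang one.
```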

These examples illustrate how Wittgenstein’s concept of language games can inform our understanding of AI’s interpretation and application of rules. They highlight the importance of considering the context, or the “language game,” in which AI systems are programmed and trained. In the next section, we will explore another theoretical perspective on AI ethics, focusing on Spivak’s concept of the subaltern.

The Subaltern and the Marginalization of Voices in AI Decision-Making

Gayatri Chakravorty Spivak’s concept of the subaltern provides a critical lens through which we can examine the potential marginalization of certain voices in AI decision-making. The subaltern, according to Spivak, represents those who are outside the hegemonic power structures and whose perspectives are often overlooked or misrepresented.

In the context of AI, the concept of the subaltern can be applied to understand how certain voices and perspectives may be marginalized in the data used to train AI systems, in the design of these systems, and in the decisions made by these systems.

Firstly, if the data used to train AI systems does not adequately represent the diversity of human experiences and perspectives, the resulting AI systems may be biased towards the perspectives of the dominant groups represented in the data. This can lead to the marginalization of subaltern voices. For instance, if an AI system trained on predominantly Western data is used to make decisions in a non-Western context, it may not adequately understand or represent the local perspectives and experiences.

Secondly, the design of AI systems can also marginalize certain voices. If the designers of these systems do not take into account the diversity of human experiences and perspectives, the resulting systems may not serve the needs of all users equally. For instance, voice recognition systems that are not trained on diverse accents may fail to understand speakers with non-standard accents, effectively marginalizing these speakers.

Finally, the decisions made by AI systems can also marginalize certain voices. For instance, AI systems used in decision-making in sectors such as healthcare, finance, and criminal justice can have significant impacts on individuals and communities. If these decisions are biased or unfair, they can reinforce existing inequalities and marginalize subaltern voices.

To illustrate the potential marginalization of certain voices in AI decision-making, let’s consider a few instances:

1. Facial Recognition Systems: Facial recognition systems have been criticized for their inability to accurately recognize individuals from certain racial and ethnic groups. For instance, studies have shown that these systems have higher error rates for people with darker skin tones. This is largely due to the lack of diversity in the datasets used to train these systems, which often consist predominantly of images of light-skinned individuals. This is a clear example of how the perspectives and experiences of certain groups (in this case, people with darker skin tones) can be marginalized in AI decision-making (a minimal per-group error-rate audit is sketched after these instances).

2. Voice Recognition Systems: Similar to facial recognition systems, voice recognition systems have also been found to be biased towards certain accents and dialects. For instance, these systems often have difficulty understanding speakers with non-standard accents, effectively marginalizing these speakers. This is another example of how the lack of diversity in training data can lead to the marginalization of certain voices in AI decision-making.

3. Predictive Policing Systems: As mentioned earlier, predictive policing systems can reinforce existing societal biases and power structures, leading to the marginalization of certain groups. For instance, if these systems are trained on crime data that reflects over-policing of certain neighborhoods or demographic groups, they may perpetuate this bias by predicting more crimes in these areas or by these groups.

4. AI in Healthcare: AI systems used in healthcare can also marginalize certain voices if they are not trained on diverse data. As discussed earlier, an AI system used to predict which patients would benefit from certain healthcare programs was found to be less likely to recommend Black patients than equally sick white patients, because the cost data it was trained on reflected existing disparities in healthcare access and utilization.
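
As a minimal sketch of the kind of audit these instances call for, the following computes error rates separately for each group from invented evaluation records (the group labels, field names, and numbers are assumptions for illustration). The point is that an aggregate accuracy figure can look acceptable while the errors fall almost entirely on one group.

```python
# Hypothetical evaluation records: whether the system handled each case
# correctly, alongside an annotated group label.
evaluations = [
    {"group": "lighter_skin", "correct": True},
    {"group": "lighter_skin", "correct": True},
    {"group": "lighter_skin", "correct": True},
    {"group": "lighter_skin", "correct": False},
    {"group": "darker_skin", "correct": True},
    {"group": "darker_skin", "correct": False},
    {"group": "darker_skin", "correct": False},
    {"group": "darker_skin", "correct": False},
]

def error_rate_by_group(records):
    """Compute the error rate for each group separately."""
    rates = {}
    for group in sorted({r["group"] for r in records}):
        group_records = [r for r in records if r["group"] == group]
        errors = sum(not r["correct"] for r in group_records)
        rates[group] = errors / len(group_records)
    return rates

for group, rate in error_rate_by_group(evaluations).items():
    print(f"{group}: error rate {rate:.0%}")
# The overall error rate is 50%, which hides the disparity: 75% of cases
# from one group fail, versus 25% from the other.
```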

These instances illustrate how the concept of the subaltern can inform our understanding of the potential marginalization of certain voices in AI decision-making. They highlight the need for diversity in the data used to train AI systems, in the design of these systems, and in the decisions made by these systems. In the next section, we will bring together these theoretical perspectives to discuss the ethical implications of AI decision-making.

Practical Challenges and Possibilities of Ethical AI Decision-Making

The theories of Foucault, Wittgenstein, and Spivak provide us with a robust framework for understanding the ethical challenges and possibilities of AI decision-making.

Foucault’s Power/Knowledge Dynamics: Foucault’s concept of power/knowledge dynamics highlights the inherent power structures that can be reinforced by AI systems. For instance, if an AI system is trained on biased data, it can perpetuate and even amplify these biases in its decision-making. This presents a significant ethical challenge, as it can lead to unfair outcomes and perpetuate existing inequalities. However, it also presents an opportunity. By recognizing and addressing these power dynamics, we can design AI systems that challenge existing power structures and promote fairness and equality.

Wittgenstein’s Language Games: Wittgenstein’s concept of language games can help us understand how AI systems interpret and apply moral rules. This is particularly relevant in the context of AI systems that are designed to make ethical decisions. The challenge here is ensuring that these systems interpret and apply these rules in a way that aligns with our moral values. The possibility lies in the potential for AI systems to help us better understand our own moral language games and perhaps even improve upon them.

Spivak’s Subaltern: Spivak’s concept of the subaltern highlights the voices that are often marginalized in decision-making processes. In the context of AI, this can refer to the perspectives and experiences that are overlooked in the design and implementation of AI systems. This presents a challenge in ensuring that AI systems are inclusive and representative of diverse perspectives. However, it also presents an opportunity. By intentionally including the voices of the subaltern in the design and implementation of AI systems, we can create systems that are more fair, inclusive, and representative.

The intersection of Foucault’s power/knowledge dynamics, Wittgenstein’s language games, and Spivak’s concept of the subaltern offers a rich tapestry of insights for understanding the ethical dimensions of AI decision-making.

AI systems, much like the societal structures Foucault describes, are imbued with power dynamics that can reinforce existing biases and inequalities. These systems are trained on data that reflects the dominant power structures of society, and their decisions can inadvertently perpetuate these structures. However, just as Foucault sees power as a productive force, we can also view AI as a tool for challenging and reshaping these power structures. By consciously designing AI systems to counteract biases in the data and promote fairness, we can use these systems as a force for positive change.

Wittgenstein’s language games provide a lens through which we can understand the interpretive nature of AI decision-making. AI systems, like players in a language game, must interpret and apply rules based on the data they are trained on. However, the meaning of these rules can vary greatly depending on the context, or the “game,” in which they are applied. This highlights the importance of training AI systems on diverse data that represents a wide range of contexts and perspectives. It also underscores the need for ongoing monitoring and adjustment of AI systems to ensure that they continue to interpret and apply rules in a way that aligns with our evolving moral values.
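
One minimal way to picture the “ongoing monitoring and adjustment” called for here is a recurring check on a simple fairness metric over recent decisions, flagging the system for human review when a gap opens up. The metric (difference in favorable-decision rates between two groups), the threshold, and the log format below are all illustrative assumptions rather than a standard.

```python
# Hypothetical decision log: for each decision, the group of the person
# affected and whether the outcome was favorable to them.
decision_log = [
    {"week": 1, "group": "A", "favorable": True},
    {"week": 1, "group": "A", "favorable": True},
    {"week": 1, "group": "B", "favorable": True},
    {"week": 1, "group": "B", "favorable": True},
    {"week": 2, "group": "A", "favorable": True},
    {"week": 2, "group": "A", "favorable": True},
    {"week": 2, "group": "B", "favorable": False},
    {"week": 2, "group": "B", "favorable": False},
]

GAP_THRESHOLD = 0.3  # maximum tolerated gap before human review is triggered

def favorable_rate(entries):
    return sum(e["favorable"] for e in entries) / len(entries) if entries else 0.0

def weekly_gap(log, week):
    """Gap in favorable-decision rates between groups A and B for one week."""
    entries = [e for e in log if e["week"] == week]
    rate_a = favorable_rate([e for e in entries if e["group"] == "A"])
    rate_b = favorable_rate([e for e in entries if e["group"] == "B"])
    return abs(rate_a - rate_b)

for week in sorted({e["week"] for e in decision_log}):
    gap = weekly_gap(decision_log, week)
    status = "flag for review" if gap > GAP_THRESHOLD else "ok"
    print(f"week {week}: gap {gap:.0%} -> {status}")
# Week 1 shows parity; by week 2 a gap has opened and the system is flagged.
# Which metric to monitor, which groups to compare, and where to set the
# threshold are themselves ethical choices that need revisiting over time.
```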

Spivak’s concept of the subaltern reminds us of the voices that are often overlooked in AI decision-making. These are the voices that are not represented in the data used to train AI systems, and whose needs and perspectives are not considered in the design of these systems. However, by intentionally including these voices in the design and implementation of AI systems, we can create more inclusive and representative AI systems. This not only helps to ensure that AI systems serve the needs of all users, but also enriches these systems with a diversity of perspectives that can enhance their decision-making capabilities.

In synthesizing these insights, we see that ethical AI decision-making is not just about designing systems that make fair and unbiased decisions. It is also about challenging existing power structures, understanding and respecting the diversity of human experiences and perspectives, and giving voice to the marginalized. It is about using AI not just as a tool for decision-making, but as a force for social change.

Conclusion

In this exploration, we have delved into the ethical dimensions of AI decision-making through the theoretical lenses of Michel Foucault, Ludwig Wittgenstein, and Gayatri Chakravorty Spivak. Our findings underscore the intricate interplay of power dynamics, interpretive language games, and the voices of the subaltern in shaping the ethical landscape of AI.

We’ve seen how AI, as a product of societal structures, can inadvertently perpetuate existing biases and power imbalances, echoing Foucault’s power/knowledge dynamics. Yet, we’ve also recognized AI’s potential as a tool for challenging these structures and promoting fairness.

Wittgenstein’s language games have illuminated the interpretive nature of AI decision-making, emphasizing the importance of diverse data and ongoing monitoring to ensure alignment with evolving moral values.

Spivak’s concept of the subaltern has highlighted the often overlooked voices in AI decision-making, underscoring the need for inclusivity and representation in AI design and implementation.

These insights have profound implications for AI ethics. They call for a shift in focus from merely mitigating bias to actively promoting fairness, inclusivity, and representation in AI systems. They also highlight the need for ongoing monitoring and adjustment of AI systems to ensure they continue to align with our evolving moral values.

Future research in this area could delve deeper into the practical applications of these theoretical insights. For instance, how can we operationalize these concepts in the design and implementation of AI systems? How can we measure the success of these efforts? How can we ensure that the voices of the subaltern are not just included, but truly heard in AI decision-making?

By continuing to explore these questions, we can work towards a future where AI not only makes decisions, but also contributes to a more fair, inclusive, and representative society.

As we stand on the precipice of an AI-driven future, we are not merely passive observers, but active participants shaping the contours of this new landscape. The theories of Foucault, Wittgenstein, and Spivak serve as our compass, guiding us through the complex ethical terrain of AI decision-making. They remind us that AI, like a mirror, reflects the society that creates it, complete with its power dynamics, interpretive games, and silenced voices. Yet, this mirror is not fixed but malleable, capable of being reshaped to present a more fair, inclusive, and representative reflection. As we continue to navigate this terrain, let us remember that the path we carve will not only determine how AI makes decisions, but also the kind of society we aspire to become. In this journey, every step matters, every decision counts. For in the realm of AI ethics, we are not just coding machines, we are coding our future.

Written by Eric Shepperd

Social theorist and activist interested in psychedelic phenomenology as a vehicle for social change in the face of the global environmental crisis.
