Shattering Illusions Podcast

Being a Neighbor in the Age of AI: The Parable of the Good Samaritan

Moral Foundations of AI (Part 2)

1. The Good Samaritan Parable and AI Ethics

2. Three Key Moral Lessons

  • Knowledge vs. Action: knowing the right thing is not enough; one must also act on it.

  • Self-Justification: people often try to limit the extent of their moral duties by narrowing who counts as their "neighbor," accepting an obligation to love some people but not others.

  • True Neighborliness: being a neighbor is a matter of compassion, not social status. In the parable, the outcast Samaritan is the true neighbor because he shows mercy to his fellow human being, while the priest and the Levite pass by the injured man without helping.

3. Practical Applications to AI

  • Expanded Sense of Compassion: we should extend our love to all people, rather than limiting it to certain individuals or groups.

  • Helping Those We Can: while we should love all people equally, we cannot help everyone, and we have special duties to some (e.g., parents, children, spouses). So what can we do? Following the advice of Augustine of Hippo (commonly known as Saint Augustine)1, we should especially seek to help those to whom we find ourselves connected by the circumstances of life. It is like a lottery: the people in your life at any given moment are there largely by chance, and they come and go. We should make sure we help these people, whom we are actually in a position to help.

  • Human Touch in Tragedy: algorithms should not replace human empathy in moments of personal crisis. A real-life example is Dorothy Pomerantz’s experience of receiving a cancer risk result from 23andMe at home, without any human support. While she was ultimately glad she received the information, the experience was very stressful, and she wished someone could have been there to help her understand the result in case it was tragic (in her case, potentially life-threatening).

  • Appealable AI Decisions: people need easy access to human review when AI systems make significant decisions (e.g., in healthcare, lending, or legal matters). This is due to the limitations of AI in handling novel ("out of distribution") scenarios.2 Humans, by contrast, can recognize when rules do not apply and an exception needs to be made.

4. What You Can Do

  • Broaden your love to all.

  • Help those you’re connected to by circumstance.

  • Insist on a human touch where AI is used in tragic circumstances (e.g., delivering a potentially fatal medical diagnosis).

  • Ensure that AI decisions that significantly affect people’s lives can be easily appealed to humans.


1. Augustine. “How We Are to Decide Whom to Aid.” Chapter 28, Book 1, On Christian Doctrine. https://faculty.georgetown.edu/jod/augustine/ddc1.html

2. See AI Research Group of the Centre for Digital Culture. 2023. “Encountering Artificial Intelligence: Ethical and Anthropological Investigations.” Journal of Moral Theology 1 (Theological Investigations of AI): i–262, pp. 241–242. https://doi.org/10.55476/001c.91230
