Open any major AI ethics framework produced since 2016 (the EU's Ethics Guidelines for Trustworthy AI, the OECD Principles, the major tech company statements) and you will find the same constellation of values: individual autonomy, fairness, transparency, privacy, and accountability. These values are real and important. But they are not universal. They are specific products of the Enlightenment tradition, rooted in liberal political philosophy, Kantian moral theory, and Anglo-American contract theory.
The assumption of methodological individualism is perhaps the deepest: that the basic unit of moral consideration is the individual person, and that rights, interests, and harms are primarily analyzed at the individual level. Ubuntu ethics, Confucian relational ethics, many Indigenous frameworks, and Islamic jurisprudence all begin from a different starting point: the community, the relationship, the network of obligations, not the isolated individual. When these traditions are absent from AI ethics conversations, the resulting frameworks systematically miss the kinds of harms that fall on communities, relationships, and social structures rather than on individuals as such.
A second deep assumption is the separation of facts and values, the idea that AI systems should be technically neutral, with ethics added as an external constraint. Many non-Western frameworks do not recognize this separation: in Confucian thought, the good technician is already a moral person; in Ubuntu, a tool that disrupts relational harmony is defective as a tool, not just unethical as an application.
A third concern is epistemic: whose knowledge counts as a valid input to AI system design? Current dominant frameworks take expert technical knowledge and Western philosophical ethics as the authoritative inputs. Indigenous epistemologies, oral knowledge traditions, and community-based knowledge are systematically excluded, not because they have been evaluated and found wanting, but because the frameworks lack the categories to even register them.
If Africa's Ubuntu ethics were incorporated into AI ethics, this would strengthen values such as harmony, consensus, collective action, and the common good. The EU approach is based on Enlightenment values such as individual freedom, equal rights, and protection against abuses by the state. The Chinese approach is grounded in Confucian values such as virtuous government, harmonious society, and social responsibilities.
— Ziesche, 'Non-Western Approaches to AI Ethics' (PhilArchive, 2022); PMC 'Ethics and Diversity in AI Policies' (2022)
The stakes are not merely academic. AI systems are being deployed globally, in healthcare, criminal justice, credit scoring, and content moderation, in contexts with vastly different cultural assumptions about what counts as fair, what counts as harm, who is a relevant stakeholder, and what obligations a technology developer owes to communities affected by their systems. A framework built on the assumption that interests are held privately by individuals will systematically miss harms that fall on intergenerational knowledge systems, on collective identities, on relational networks.
This path explores three non-Western frameworks in depth: Ubuntu, Confucianism, and Indigenous epistemologies.
Quick reflection
Most major AI ethics frameworks use individual autonomy as a foundational value. What specifically would be missing from your analysis of an AI harm if you could only register harms to individuals, not to communities or relationships?