Step 7 of 7 · ~9 min read
Reflection: Technology, Power, and Your Own Position
This step poses the framework's most personally searching questions: about the technology you use, the systems you work within, and what a genuinely critical and constructive response to algorithmic colonialism might look like from your specific position.
Prompts to consider
- Benjamin argues that the appearance of technical neutrality is itself a mechanism of domination: algorithmic systems that encode structural inequality look objective and thereby make the inequality harder to challenge (see the sketch after these prompts). Think about the algorithmic systems that most directly affect your own life or work. What decisions do they make about you or on your behalf? Do you know the basis on which they make those decisions? And if you discovered that a system affecting you was encoding a structural bias against a group you belong to, what options would you actually have to challenge or exit that system?
- Birhane argues that genuinely liberatory technology must emerge from the needs of communities and be developed and controlled by those communities, not imported as pre-packaged 'solutions' by outside actors. Think about a technology currently being deployed in a community you are part of or know well (in healthcare, education, agriculture, criminal justice, or another domain). Was the community involved in identifying the problem the technology is designed to solve? Did community members have genuine input into the design? Does the community have ongoing control over how the system is governed and how its outputs are used? And what does your answer reveal about the gap between the rhetorical commitment to community benefit and the actual distribution of power in AI development?
- The decolonial AI ethics framework calls for a 'radical reimagining' of AI's purpose and potential. Think about what this would mean in practice for a domain you work in or care about. What would an AI system designed from the ground up to serve community needs rather than corporate interests look like? What would it prioritize, what would it refuse to do, and what institutional structures would be needed to ensure it remained accountable to those needs over time? And does engaging in this imaginative exercise reveal assumptions built into existing AI systems that you had previously taken for granted?
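To make the first prompt concrete, here is a minimal, hypothetical sketch of the dynamic Benjamin describes; it is not an example from her work, and every name and number in it is invented for illustration. The scoring function never sees a protected attribute, so it looks neutral, yet it reproduces a historical disparity because a facially neutral feature (location) acts as a proxy for that history.

```python
# Hypothetical illustration: a "neutral" score that encodes past inequality.
# Approval rates differ by neighborhood because of past discriminatory
# lending, not because of applicant merit. (All values are invented.)
HISTORICAL_APPROVAL_RATE = {
    "district_a": 0.82,  # historically favored neighborhood
    "district_b": 0.41,  # historically redlined neighborhood
}

def credit_score(income: float, zip_code: str) -> float:
    """Score from income and location only; no protected attribute is used."""
    base = min(income / 100_000, 1.0)
    # Fitting to biased history turns location into a proxy for that history.
    neighborhood_factor = HISTORICAL_APPROVAL_RATE[zip_code]
    return 0.5 * base + 0.5 * neighborhood_factor

# Two applicants with identical incomes receive different scores solely
# because of where past discrimination located their families.
print(credit_score(60_000, "district_a"))  # 0.71
print(credit_score(60_000, "district_b"))  # 0.505
```

Because the inputs look objective, a rejected applicant in district_b has no obvious grounds for appeal: the disparity is hidden inside the training history rather than stated as a rule, which is exactly what makes it harder to challenge.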
Write at least a few sentences, then you can request feedback or mark this step complete.