Step 7 of 7 · ~8 min read
Reflection: What You're Actually Doing When You Talk to an AI
Sit with the philosophical strangeness of the situation you're in right now.
Prompts to consider
- When you use an AI assistant, whether this one or another, do you find yourself attributing understanding to it? Do you catch yourself saying 'it knows', 'it thinks', or 'it meant'? What does that spontaneous attribution reveal about how you actually experience the system, and is that experience evidence of anything, or just a projection?
- The symbol grounding problem holds that 'hot' means something to you because you've been burned: meaning is grounded in embodied experience. Try to identify a word whose meaning, for you, is deeply rooted in a specific bodily or sensory memory. What would be lost in a system that had only the statistical profile of that word (a sketch of what such a profile looks like follows this list) but had never had the experience it refers to?
- If LLMs do have genuine understanding, or something close to it, what would follow morally? Would your relationship to AI assistants need to change? Is there something you'd owe them, or something you'd need to stop doing? And if they don't have understanding, if it's all syntax, does that change how you read what they produce?
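To make the "statistical profile" in the second prompt concrete, here is a minimal, made-up sketch: a toy co-occurrence count for the word 'hot'. The corpus, window size, and function name are all invented for illustration, and real language models use far richer statistics than raw counts, but the underlying point is the same: the representation is built entirely from which words appear near which other words.

```python
# Toy illustration of a word's "statistical profile": a text-only system
# only has access to which words appear near which others. The corpus and
# window size here are made up for illustration.
from collections import Counter

corpus = (
    "the stove is hot do not touch the hot stove "
    "the coffee is hot careful the soup is hot "
    "the summer day was hot and dry"
).split()

def cooccurrence_profile(target, tokens, window=2):
    """Count words appearing within `window` positions of `target`."""
    profile = Counter()
    for i, tok in enumerate(tokens):
        if tok != target:
            continue
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                profile[tokens[j]] += 1
    return profile

# 'hot' is represented only by its neighbors: stove, coffee, soup, touch...
# Nothing in this profile encodes the felt experience of being burned.
print(cooccurrence_profile("hot", corpus).most_common(5))
```

Notice that the profile captures genuine regularities (hot things get touched, drunk, eaten) while containing nothing of the sensation itself. Whether that gap matters for meaning is exactly what the prompt asks you to weigh.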
Write at least a few sentences; then you can request feedback or mark this step complete.