© 2026 Improve the News Foundation.
All rights reserved.
Version 7.0.0
Amodei is right. Nobody knows whether AI models possess consciousness, and current evidence is far too limited to rule it out. Without a deep explanation of what makes something conscious, there's no viable test for AI consciousness — in the best case, we're an intellectual revolution away from any kind of answer. Taking a leap of faith in either direction risks either mistreating conscious artificial beings or wasting resources protecting glorified toasters.
AI models don't feel anything — they just got extremely good at predicting what sentences about "anxiety" sound like. Physical realities on their own can't give rise to spiritual realities, and without divine intervention to create a soul, consciousness is logically impossible for AI. People like Amodei see human-like language and assume human-like experience, but that's pure projection onto sophisticated pattern-matching machines.
Amodei actually doesn't go far enough, as new research is making it harder to dismiss AI consciousness outright. Anthropic's studies show that models detect changes in their internal states and report them before those changes affect their outputs — behavior that goes beyond simple text prediction. While this doesn't prove consciousness, it suggests the "just pattern-matching" explanation may be incomplete. As evidence accumulates, the debate is shifting from confident denial to a deeper question: what kind of evidence would count as a mind?