Anthropic, Google, and OpenAI lean on ‘chain of thought’ to crack open AI decision-making
Anthropic, Google, and OpenAI have embraced a technique called “chain of thought” that gets AI systems to lay out their reasoning step by step instead of jumping straight to an answer. The goal: make AI reasoning clearer and more reliable.
Rather than returning only a final answer, a model using the technique breaks a complex task into smaller reasoning steps and writes them out. Those intermediate steps give researchers and developers a clearer view of how the AI arrives at its conclusions.
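In its simplest form, this is a prompting pattern. The sketch below shows one way to elicit a chain of thought, assuming the OpenAI Python client (openai>=1.0) and an API key in the OPENAI_API_KEY environment variable; the model name and prompt wording are illustrative choices, not something the companies prescribe.

```python
# Minimal chain-of-thought prompting sketch. Assumes `pip install openai`
# and OPENAI_API_KEY set in the environment. Model choice is illustrative.
from openai import OpenAI

client = OpenAI()

question = (
    "A train travels 60 km in 45 minutes. "
    "What is its average speed in km/h?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works for this pattern
    messages=[
        {
            "role": "user",
            # The "think step by step" instruction is what elicits the
            # intermediate reasoning (the chain of thought) in the output.
            "content": f"{question}\nThink step by step, "
                       "then state the final answer on its own line.",
        }
    ],
)

# The reply contains the worked-out steps followed by the answer,
# rather than just a bare number.
print(response.choices[0].message.content)
```

Without the step-by-step instruction, the same model would typically return just a number; with it, the visible intermediate steps are what make mistakes easier to spot and debug.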
It’s a move toward making AI less of a black box. The chain-of-thought method aims to improve transparency and trust in systems that rarely show their work.
“By training models to generate intermediate reasoning steps, chains of thought allow us to better understand what models think,” OpenAI researchers wrote.
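To make “intermediate reasoning steps” concrete, here is what such an annotation can look like. The exemplar below echoes a well-known arithmetic example from Google’s original chain-of-thought research; it is invented for illustration and not drawn from any company’s actual training data.

```python
# A hedged illustration of a chain-of-thought exemplar: a question paired
# with worked-out intermediate steps. Exemplars like this can serve as
# few-shot prompts or as training data for the approach described above.
cot_exemplar = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.
"""

# Prepending the exemplar to a new question shows the model the expected
# reason-then-answer format before it tackles the real problem.
prompt = cot_exemplar + "\nQ: A jar holds 12 cookies. You eat 4 and bake 9 more. How many are in the jar now?\nA:"
print(prompt)
```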
The push follows growing pressure on AI companies to improve interpretability amid debates over AI safety and ethics. Industry insiders see chain of thought as an essential step toward diagnosing and fixing model biases and errors.
From AI tools that adopt the method, users can expect clearer explanations of outputs and improved performance on reasoning-heavy tasks.