A prompting technique that encourages a model to produce intermediate reasoning steps before its final answer, often improving accuracy on complex tasks. CoT traces can offer some transparency into the model's "thinking," but they are not a substitute for true explainability: the stated reasoning may be fabricated and need not reflect the computation that actually produced the answer.
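As a minimal sketch of how such a prompt is typically assembled, the snippet below combines one worked exemplar with the common "Let's think step by step" cue; the question text and exemplar are illustrative, and sending the prompt to an actual model is left to whatever LLM client you use.

```python
# Minimal sketch of chain-of-thought prompting: plain string
# construction, no model API assumed.

def build_cot_prompt(question: str) -> str:
    """Wrap a question with one worked exemplar plus a step-by-step
    cue so the model imitates explicit intermediate reasoning."""
    # Hypothetical few-shot exemplar showing the desired reasoning style.
    exemplar = (
        "Q: A store has 23 apples and sells 9. How many remain?\n"
        "A: Let's think step by step. The store starts with 23 apples. "
        "It sells 9, so 23 - 9 = 14. The answer is 14.\n\n"
    )
    return exemplar + f"Q: {question}\nA: Let's think step by step."


if __name__ == "__main__":
    prompt = build_cot_prompt(
        "A train travels 60 km in 1.5 hours. What is its average speed?"
    )
    # Send `prompt` to your model of choice; a final answer is usually
    # extracted by parsing the "The answer is ..." pattern in the reply.
    print(prompt)
```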