The degree to which a human can understand how a model produces its outputs. Interpretability is a stronger property than explainability: it implies genuine understanding of a model's internal mechanisms, not merely a plausible account of its behavior. Truly interpretable models are often less capable than black-box alternatives.
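A minimal sketch of the trade-off, assuming scikit-learn and its bundled diabetes dataset: a linear model's coefficients can be read directly as the mechanism relating inputs to outputs, while a boosted ensemble (here a stand-in for a black box) admits no comparably direct reading.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)

# Interpretable: each coefficient states exactly how a feature
# shifts the prediction, so the full mechanism is human-readable.
linear = LinearRegression().fit(X, y)
for name, coef in zip(X.columns, linear.coef_):
    print(f"{name}: {coef:+.2f}")

# Black box: hundreds of trees may fit the data better, but their
# combined decision logic resists direct inspection.
boosted = GradientBoostingRegressor().fit(X, y)
print("R^2 (linear): ", linear.score(X, y))
print("R^2 (boosted):", boosted.score(X, y))
```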
See: Black box; Explainability; XAI