Explainable Reasoning is the requirement that users can understand how a response was produced. In the Nahra model, this does not mean exposing every internal computation. It means the user can inspect the source basis, supporting evidence, and reasoning path well enough to validate and rely on the output.
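The three inspectable elements named above can be pictured as a simple data structure attached to each response. This is a minimal sketch, not the Nahra model's actual representation; the class and field names (`Explanation`, `source_basis`, `supporting_evidence`, `reasoning_path`) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    # Hypothetical fields, one per element the user should be able to inspect.
    source_basis: list[str]         # sources the response draws on
    supporting_evidence: list[str]  # excerpts that back specific claims
    reasoning_path: list[str]       # ordered steps from sources to conclusion

    def render(self) -> str:
        """Produce a user-inspectable summary of how the response was derived."""
        lines = ["Sources:"] + [f"  - {s}" for s in self.source_basis]
        lines += ["Evidence:"] + [f"  - {e}" for e in self.supporting_evidence]
        lines += ["Reasoning:"] + [
            f"  {i}. {step}" for i, step in enumerate(self.reasoning_path, 1)
        ]
        return "\n".join(lines)
```

The point of the sketch is the contract, not the implementation: exposing these three fields is enough for a user to validate the output without seeing every internal computation.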