We already have models that can justify their outputs with reasoning chains, but should OpenAI push this further so that models can explain how they think in user-understandable concepts, the way humans do? If yes, how? If no, what are the risks?