Gillian Hadfield, of the Schwartz Reisman Institute for Technology and Society, calls attention to the need to reframe the conversation around explainable AI. In an illuminating piece on the topic, she highlights the field's inadequacy at actually “explaining” decisions made about individuals in a language that speaks to them. She points out that regulations such as Canada’s privacy reform Bill C-11 grant the right to explanation to the individuals affected by these systems; hence, explanations should be aimed at them. In her view, what we need is justifiable AI: an assurance that “the decisions that affect us are justifiable according to the rules and norms of our society.” Read the piece here.