Q. Vera Liao: Human-Centered AI Transparency: Bridging the Sociotechnical Gap

Abstract: Transparency—enabling appropriate understanding of AI technologies—is considered a pillar of Responsible AI. The AI community has developed an abundance of techniques in the hope of achieving transparency, including explainable AI (XAI), model evaluation, and uncertainty quantification. However, there is an inevitable sociotechnical gap between these computational techniques and the nuanced, contextual human needs for understanding AI. Mitigating the sociotechnical gap has long been a mission of the HCI research community, but the age of AI has brought new challenges to this mission. In this talk, I will discuss these new challenges and some of our approaches to bridging the sociotechnical gap for AI transparency: conducting critical investigations into dominant AI transparency paradigms; studying people's transparency needs in diverse contexts; and shaping technical development by embedding sociotechnical perspectives in evaluation practices.