10th Dean’s Global Forum examines AI, governance, and public benefit with Alondra Nelson
Alondra Nelson, Harold F. Linder Professor of Social Science, joined Dean and CEO Marwan M. Kraidy for a community discussion on artificial intelligence, public responsibility, and governance at Northwestern University in Qatar’s 10th Dean’s Global Forum, titled “AI: Purpose, Governance and Public Benefit.”
The conversation began with Nelson reflecting on her intellectual journey and what led her to the intersection of science, race, society, and technology. She traced her path to her undergraduate years, recalling that she “grew up a STEM kid” before discovering four-field anthropology, an interdisciplinary approach that shaped her lifelong commitment to bridging science, social science, and the humanities. This foundation, she explained, continues to inform her belief that “AI has always been about people.” She noted, “My work has never treated artificial intelligence as abstract data or code, but as something deeply embedded in human experience, power, and social systems.”
The conversation then turned to AI’s impact on research practices. Nelson discussed the “black box” problem in generative AI, noting challenges of explainability, reliability, and accountability. “We already have a replicability crisis; imagine that crisis at scale,” she said. She linked this to agnotology, the study of culturally produced ignorance, observing that “the black box is leveraged to disavow responsibility and liability.”
In discussing the implications of generative AI for education, Nelson observed that higher education has long adapted to technological change, from word processors to digital research tools. She encouraged moving beyond a narrow focus on misconduct toward more fundamental questions about pedagogy, learning objectives, and assessment.
She went on to encourage educators to leverage the opportunities enabled by AI and to reconsider how assignments are designed, what skills are being cultivated, and how learning outcomes are evaluated. She also emphasized the importance of creating structured environments in which students can experiment responsibly with AI technologies. “The introduction of AI should prompt us to rethink pedagogy, learning objectives, and assessment, not simply focus on misconduct,” she said.
When asked how AI is impacting issues of digital divide, race, governance, and the environment, Nelson emphasized that many AI technologies are still in development and that their impacts can be complex and varied. She highlighted the importance of public engagement, ongoing research, and adaptive governance to better understand these technologies.
Drawing on her role in developing the AI Bill of Rights, and her experience as a social scientist, Nelson emphasized that, “AI systems must be grounded in principles of safety, privacy, fairness, transparency, and human recourse.” She added, “These are not abstract ideals, but necessary conditions for building public trust and ensuring AI serves the public good.”