9:00 - 9:30 a.m.
Breakfast
Kellogg Global Hub, Outside Room L-120
9:30 - 10:00 a.m.
Welcome Remarks
Kellogg Global Hub, Room L-120
10:00 - 10:45 a.m.
Session 1: AI and the Future of Scientific Inquiry
Dashun Wang, Kellogg Chair of Technology and Professor at the Kellogg School of Management, Northwestern University
Airplanes for the Mind: Principles for Scientists Flying AI Agents
Steve Jobs famously called computers “bicycles for the mind”—a metaphor for the personal-computing era, emphasizing tools that let individuals go farther and faster with less effort. Today, AI agents demand a new metaphor. They are airplanes for the mind: heavier-than-air machines that should not fly but do. These “airplanes” have the potential to dramatically extend the scale, speed, and coordination of human cognition, while introducing new challenges for science and discovery. This talk will focus on these “airplanes” and ask what changes for science when AI systems move from passive tools to active partners, drawing on research from the science of science. Throughout history, discoveries have been made by humans. As AI systems increasingly participate in discovery, the central question is how we design human–AI collaboration to make science more accountable, reproducible, and ultimately transformative. The question is not whether machines replace scientists, but what kind of scientist emerges when we learn to fly.
Vipin Kumar, Regents Professor and William Norris Land Grant Chair in Large-Scale Computing, University of Minnesota
Reimagining Scientific Discovery in the Age of Generative AI: From Data-Driven Models to Knowledge-Guided Intelligence
Recent advances in generative AI and foundation models have enabled powerful data-driven approaches to scientific discovery. However, scientific problems often require more than pattern recognition: models must extrapolate reliably and remain consistent with established scientific principles. In such settings, scale alone is insufficient.
This talk introduces knowledge-guided machine learning, in which domain knowledge is embedded into model architectures, objective functions, and training procedures to provide explicit inductive bias. This approach complements data-driven learning and enables models that are both expressive and scientifically coherent, particularly in emerging generative AI systems where reliable out-of-distribution generalization is critical. Examples from environmental and Earth system science illustrate gains in generalization, interpretability, and physical consistency.
These considerations highlight the importance of aligning AI models with scientific principles. More broadly, efforts to make AI robust and reliable for scientific discovery raise foundational questions about how models are developed, interpreted, and applied, and suggest that advances in AI for science may also help inform future foundations of AI.
Tanya Berger-Wolf, Professor of Computer Science and Engineering; Electrical and Computer Engineering; and Evolution, Ecology, and Organismal Biology, Ohio State University
AI for Nature: From Science to Impact
Computation has fundamentally changed the way we study nature. New data collection technologies, such as GPS, high-definition cameras, autonomous vehicles (underwater, on the ground, and in the air), genotyping, acoustic sensors, and crowdsourcing, are generating data about life on the planet that are orders of magnitude richer than any previously collected. We have built AI systems that can make sense of the data these technologies generate. And with the explosion of large language models, much of this capability has been democratized, though not yet equitably, and not yet everywhere it is needed. But sensing is not understanding.
The need for understanding is more urgent than ever, and the challenges are great. We are losing biodiversity at an unprecedented rate, with species disappearing faster than we can name them. Moving AI from a tool that speeds up and scales up data collection and processing to one that genuinely understands ecosystems is the frontier that matters now. The talk will discuss how AI can help bridge the knowledge gap about living organisms, enabling scientific inquiry, conservation, and policy decisions, and will present a vision and examples of AI as a trusted partner in the fundamentally human quest to understand and protect the natural world.
10:45 - 11:30 a.m.
Session 2: AI, Mind, and Human-Centered Design
Moshe Vardi, University Professor and the George Distinguished Service Professor in Computational Engineering, Rice University
Are AI Minds Genuine Minds?
The question “Are AI minds genuine minds?” invites us to examine the nature of mind itself and whether artificial intelligence meets its defining criteria. A genuine mind is typically associated with consciousness, self-awareness, intentionality, and the capacity to experience mental states such as emotions. Whether AI qualifies as possessing a true mind ultimately depends on how we define the essential qualities of consciousness and intelligence. This question lies at the heart of the discussion.
Nicole Ellison, Karl E. Weick Collegiate Professor of Information, School of Information, University of Michigan
Signals and Sources: A Communication Perspective on Emergent AI-Human Entanglements
In this short talk, I will share some of my emergent thoughts around AI companions, AI-crafted interpersonal messages, and other AI-human interactions that pose fundamental questions for scholars and AI users. Consider, for instance, important relationship development or identity artifacts such as online dating profiles or birthday greetings. As these communication artifacts become easier to automate, what practices – such as purposefully introducing typographical errors to appear more human – may emerge in response? This talk will consider how cultural practices may be shifting in an age where norms have yet to be crystallized, social practices are diverse and evolving, and AI’s technical capabilities have yet to be fully developed. In other words: When do we need a human, and what evidence might we collect (or produce) to prove humanness when the machines excel at doing many of the things we used to consider uniquely human?
Andrea L. Guzman, Associate Professor of Communication, Northern Illinois University
What Are We Doing?
Scholars across disciplines are wrestling with multiple questions surrounding the implications of AI for research, including fierce debates over research and publishing ethics. Often such discussions focus on AI and its capabilities while overlooking the equally important element of humans as researchers, participants, and consumers. Research is neither a human process nor a machine process but one of human-machine communication. Critical to charting the future of research is examining the assumptions about the nature of humans and machines that have served as the foundation of current practices and ethics, and how AI is disrupting not only the capabilities of technology but also the role of humans and how we conceive of ourselves. A fundamental question for scholars to prioritize is “What are WE doing?” – in terms of our values, our identities, and how we conduct our research – instead of giving primary consideration to what AI can or cannot do.
11:30 a.m. - 12:15 p.m.
Session 3: AI and the Transformation of Research Methods
Joerg Matthes, Professor of Communication Science, Department of Communication, University of Vienna
Designing the Artificial: AI and the Next Generation of Quantitative Social Science Methods
Artificial intelligence is rapidly transforming the world, including the ways we do research. This talk examines how generative and analytical AI systems are reshaping the entire research process in the social sciences (and beyond), from literature review and hypothesis generation to research design, data collection, analysis, writing, and peer review. Using concrete examples from automated content analysis, AI-generated experimental stimuli, simulation-based research, and hyperrealistic synthetic media, the talk illustrates both the promise and the risks of AI-driven research practices. Rather than advocating either uncritical adoption or rejection, the presentation calls for a human-centered approach grounded in epistemic responsibility and methodological rigor. The talk concludes by outlining key principles for integrating AI into communication research in ways that foster creativity and theoretical innovation, while preserving the central role of human judgment.
Yingdan Lu, Assistant Professor, School of Communication, Northwestern University
Computational Video Analysis in the Era of Large Language Models
Social science researchers are increasingly leveraging large-scale video data and computational methods to examine key concepts and questions. Yet much existing work relies on a narrow set of video features and focuses mainly on text or static visuals, leaving other high-dimensional features and modalities underexamined. This talk addresses these gaps by presenting empirical studies that use large language models (LLMs) and other computational methods to enable more systematic and comprehensive multimodal understanding. The talk will conclude with a discussion of future opportunities and challenges in this rapidly evolving field.
Winson Peng, Professor, Department of Communication, Michigan State University
Beyond Plausible Behavior: Validating LLM-Driven Simulations for Social Science
Large language models are making AI-based social simulation increasingly persuasive at the level of behavior. Yet behavioral fluency does not resolve the question of validation. This talk distinguishes three evaluation strategies in LLM research: benchmarking against established benchmark tasks, benchmarking against human datasets, and evaluating against theoretically specified mechanisms. I argue that, for many social science purposes, the key question is not whether LLM agents align with human behavior, but whether they respond to theoretically meaningful variation under controlled conditions. Using a study of media engagement, I show how valid prompt manipulations can support mechanism-sensitive, theory-centered evaluation in AI-enabled social science.
12:15 - 2:00 p.m.
Lunch
2:00 - 2:45 p.m.
Session 4: Misinformation, Manipulation, and Public Resilience in the Age of AI
V.S. Subrahmanian, Walter P. Murphy Professor of Computer Science and Co-Director of the Northwestern Network for Collaborative Intelligence, Northwestern University
Covert Social Influence Operations: Past, Present, and Future
Covert Social Influence Operations (CSIOs) have been studied for almost a dozen years. Since the first study of CSIOs in the 2014 Indian election and the DARPA Twitter Influence Bot Detection Challenge of 2015 under the SMISC Program, the field has come a long way. After a brief review of past CSIOs, this talk will turn to how recent advances in AI will influence the direction of CSIOs. We can think of CSIOs as involving a threat actor (the CSIO operator) targeting a defender (e.g., a social platform). Though the extraordinary ability of modern AI to generate realistic text, image, video, audio, and multimodal content poses a potential threat, I will argue that the even more extraordinary ability of AI to dynamically adapt to changing circumstances and defender tactics will likely pose an even bigger one.
Cuihua (Cindy) Shen, Professor of Communication, UC Davis
Can Media Literacy Save Us?
Media literacy is frequently championed as the ultimate defense against the tide of multimodal and synthetic misinformation. However, as generative AI matures, the technical sophistication of deception is rapidly outpacing traditional educational frameworks. This raises a fundamental question: Can media literacy truly save us, or are we placing a disproportionate burden on individual cognition to solve a systemic crisis? In this talk, I will challenge the "silver bullet" narrative by presenting results from multiple empirical studies that examine the efficacy and limitations of media literacy interventions, and discuss future research and policy directions to preserve the integrity of our shared information ecosystem.
Jeff Hancock, Harry and Norman Chandler Professor of Communication and Senior Fellow at the Freeman Spogli Institute for International Studies, Stanford University
TBD
2:45 - 3:30 p.m.
Session 5: Knowledge Production in the Age of AI
Agnes Horvat, Associate Professor of Communication Studies, Northwestern University
AI-Assisted Writing and Decision-Making
As artificial intelligence reshapes human-centered computing, it is increasingly important to examine how AI systems are changing human practices. In this talk, I explore the growing influence of large language models (LLMs) on both scientific writing and decision-making.
Drawing on an analysis of more than 15 million biomedical abstracts, I identify abrupt shifts in vocabulary that are consistent with LLM-assisted writing. These patterns suggest that a substantial share of recent abstracts, at least 13.5% in 2024, have been shaped by such systems. The findings point to a rapid and largely uncoordinated integration of LLMs into scholarly workflows, raising important questions about linguistic homogenization, evolving authorship norms, and what scholarly communication will look like in an AI-mediated environment.
I will also present ongoing experimental research on AI-assisted decision-making. Using controlled experiments that model academic hiring as hidden-profile tasks, we examine how different forms of AI support influence outcomes. Specifically, we compare individual-level AI decision aids with group-level AI facilitators, assessing their effects on decision accuracy as well as participants’ satisfaction with the evaluation process.
By bringing together large-scale observational evidence and controlled experimental results, our work highlights how AI is augmenting human capabilities by restructuring the processes through which knowledge is produced and evaluated. Reimagining human-centered computing in this context requires designing systems that support human judgment while preserving accountability and control.
James Evans, Max Palevsky Professor in Sociology and Data Science, University of Chicago
TBD
Amy Brand, Director and Publisher, MIT Press
Who Owns Knowledge? Authorship, Attribution, and AI in the Research Ecosystem
Re-imagining human-centered computing requires rethinking not just AI system design, but the governance of knowledge production itself. Generative AI now functions as an epistemic actor, actively producing, synthesizing, and circulating knowledge, and therefore fundamentally reshaping research practices. This transformation challenges core principles of scholarly communication, including authorship, attribution, and accountability. This talk examines how authorship frameworks should evolve to accommodate hybrid human–AI collaborations while preserving transparency and responsibility. Building on contributorship models such as CRediT and drawing on current initiatives at the MIT Press, I argue that scholarly publishing can serve as a critical intervention point for shaping trustworthy AI ecosystems, establishing norms and infrastructures that ensure rigorous, accountable knowledge production in an AI-mediated future.
3:30 - 4:00 p.m.
Coffee and Snack Break
4:00 - 5:30 p.m.
Panel Discussion: Bridging Disciplinary Perspectives on AI in HCC Research
Panelists: TBD
5:30 - 5:45 p.m.
Group Photograph
6:00 - 7:00 p.m.
Reception
Evanston Corner Bistro, Lighthouse Room, 703 Church St, Evanston
7:00 - 9:30 p.m.
Dinner and Lambert Talk
Ben Shneiderman, Emeritus Professor of Computer Science, University of Maryland
Human-Centered AI: Amplify, Augment, Empower, and Enhance Human Performance
A new synthesis is emerging that integrates AI technologies with Human-Computer Interaction to produce Human-Centered AI (HCAI). Advocates of this new synthesis seek to amplify, augment, and enhance human abilities to empower people, build their self-efficacy, support creativity, recognize responsibility, and promote social connections.
Researchers, developers, business leaders, policy makers, and others are expanding the technology-centered scope of Artificial Intelligence (AI) to include Human-Centered AI (HCAI) ways of thinking. This expansion from an algorithm-focused view to one that embraces a human-centered perspective guides evaluation through red-team testing, expert reviews, and continuous improvement strategies. These strategies are especially important for Generative AI, which is impressively powerful, but alarmingly flawed. User interface design and governance strategies will be presented.