Workshop Speakers and Schedule

Thursday, May 18, 2023

5:30 - 8:30 p.m.
Welcome and Opening Reception

LeTour, Warhol Room, 625 Davis St, Evanston

Friday, May 19

8 - 8:30 a.m.
Breakfast
8:30 - 8:45 a.m.
Welcome and Opening Remarks

Watch welcome and opening remarks

Kellogg Global Hub, Seminar Room 4101

Francesca Cornelli
Dean of the Kellogg School of Management, Donald P. Jacobs Chair in Finance, Professor in Finance, Northwestern University

Julio M. Ottino
Dean of the McCormick School of Engineering, Distinguished Robert R. McCormick Institute Professor and Walter P. Murphy Professor of Chemical and Biological Engineering, Northwestern University

8:45 - 10:15 a.m.
Session 1
Session Chair: Ignacio Cruz
Mary Beth Watson-Manheim
Professor and Department Head of Managerial Studies, University of Illinois Chicago
Watch Watson-Manheim's presentation

Adoption of AI in Context: Human-Digital Configuration Work and Implications
A primary goal of most organizational AI deployments is to reduce human labor while enhancing the work process. The technology is envisioned to improve established patterns of work and enhance current work systems; in other words, to improve a known situation. However, AI is not a narrow set of technologies with specific, pre-determined applications; it is open and contingent, offering myriad possibilities for action that are context-dependent and emergent. We suggest that AI adoption requires human expertise and ingenuity to “figure out,” in context, how to integrate the technology into work practices and organizational systems. This “figuring out” process is likely to lead to unexpected reconfigurations of work patterns and practices. We label this process human-digital configuration work.

We illustrate the emergence of unexpected outcomes in a case study of digital transformation in the banking industry. AI deployment was central to the digital transformation strategy. We uncover disruptions in employee work processes with positive and negative consequences for interpersonal interactions. Specifically, we identify two different forms of algorithmic technologies used by employees. The users’ actions and interactions in the adoption process created changes in patterns and nature of established work practices. There were significant consequences, particularly for the quality and quantity of social interactions. We discuss the implications of these changes for social relationships and human connectedness, as well as the meaning of the work. Moreover, we propose that the stabilization of new human-digital configurations with existing work practices may challenge individual and professional identity as well as the deep structure and identity of the organization.

Hatim Rahman
Assistant Professor of Management and Organizations, Kellogg School of Management, Northwestern University
Watch Rahman's presentation

Control in the Age of Algorithms: Exploring the Cold Start Problem and Reputational Interdependence on Online Labor Markets
Scholars have developed an intimate understanding of how people use social networks to navigate traditional labor markets. In my ethnographic study of workers in one of the largest online labor platform markets, I found that people could not rely on their existing social networks, in part because online platforms primarily rely on algorithms to control people's mobility. First, in the absence of existing social ties on the platform, inexperienced workers encounter the "cold start" problem, and I detail the consequences this problem had for workers’ careers. Second, experienced workers who obtain a rating evaluation encounter what I call "reputational interdependence": the platform's algorithms share workers' rating evaluations within and across other online networks, without workers' consent. Together, I theorize how platforms' use of algorithms to control workers introduces challenges that depart from prior literature and advance our understanding of networks in the age of algorithms.

10:15 - 10:30 a.m.
Break
10:30 a.m. - Noon
Session 2
Session Chair: Jennifer Cutler
Nancy Cooke
Professor of Human Systems Engineering, Arizona State University
Watch Cooke's presentation

Trusted Distributed Human-Machine Teaming for Safe and Effective Space-Based Missions
A challenge of space-based missions is effective teaming in a geographically and temporally (i.e., spatio-temporally) distributed environment. The geographic distribution of teammates, coupled with variable communication latency, challenges effective teamwork. This challenge is exacerbated by the complexity of a heterogeneous multiteam system composed of humans, robots, and artificially intelligent (AI) agents. The long-term objective of this research is to develop an AI agent that monitors distributed human-machine teams (HMTs) in space-based missions to identify potential team states (e.g., fatigue, conflict, trust) and intervene when needed to improve teamwork and team effectiveness. We have identified challenges from experts in space operations, developed a scenario to reflect those challenges, and identified sensor data for HMT monitoring.
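To make the monitoring idea concrete, here is a minimal, purely illustrative sketch of the kind of loop such an agent might run. The windowed features (speech rate, heart-rate variability, message latency) and threshold rules are assumptions for illustration, not the project's actual measures; a learned model would replace the toy rules.

```python
# Highly simplified sketch (not the project's system): a monitoring loop that
# fuses hypothetical sensor features for a distributed human-machine team and
# flags states that might warrant an intervention.
import numpy as np

rng = np.random.default_rng(3)

# Assumed per-teammate features aggregated over a time window:
# [speech rate, heart-rate variability, message latency in seconds]
def read_window():
    return rng.normal([1.0, 50.0, 2.0], [0.3, 10.0, 1.5], size=(4, 3))

def infer_team_state(window):
    """Toy rules standing in for a learned model of fatigue/conflict/trust."""
    speech, hrv, latency = window.mean(axis=0)
    if hrv < 40:
        return "possible fatigue"
    if latency > 4.0:
        return "communication breakdown (high latency)"
    return "nominal"

for step in range(5):
    state = infer_team_state(read_window())
    if state != "nominal":
        print(f"t={step}: {state} -> suggest intervention (e.g., rebalance tasks)")
    else:
        print(f"t={step}: nominal")
```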

Malte Jung
Associate Professor of Information Science, Cornell University
Watch Jung's presentation

Teamwork with Robots
Research on human-robot interaction (HRI) to date has largely focused on examining a single human interacting with a single robot. This work has led to advances in fundamental understanding of the psychology of human-robot interaction (e.g., how specific design choices affect interactions with and attitudes towards robots) and of the effective design of human-robot interaction (e.g., how novel mechanisms or computational tools can be used to improve HRI). However, the single-robot-single-human focus of this growing body of work stands in stark contrast to the complex social contexts in which robots are increasingly placed. While robots increasingly support teamwork across a wide range of settings, from search and rescue missions and minimally invasive surgery to space exploration and manufacturing, we have limited understanding of how robots affect team dynamics and how to design robots that support groups of people. In this talk I present empirical findings from several studies that show how robots can shape, in direct but also subtle ways, how people interact and collaborate with each other in teams.

Noon - 1 p.m.
Lunch
1 - 2:30 p.m.
Session 3
Session Chair: Matt Groh
Melissa Valentine
Associate Professor of Management Science and Engineering, Stanford University
Watch Valentine's presentation

Becoming Informated: How Expert Occupations Gain Reskilling During Algorithm Development and Use
Many studies explore how expert occupations adopt new algorithmic systems but reveal experts’ resistance because of the increased surveillance, standardization, and loss of control these systems can involve. Other studies acknowledge that some occupations can experience a valued reskilling when they use algorithms, becoming more “informated” or “augmented” in their decision-making. Missing from this research, however, is an understanding of when and how new algorithms enable different occupations to undergo valued reskilling versus deskilling. In this paper, we present an ethnographic study of data scientists’ algorithm development process that produced reskilling for their domain experts, their retail company’s fashion buyers. To use the algorithmic system, the buyers struggled to gain new intellective (i.e., conceptual thinking) skills, including 1) explicitly articulating the theories driving their decision-making and then 2) proposing, conducting, and evaluating tests of those theories using the algorithm. The data scientists engaged in reskilling practices during their user-centered algorithm design process to support the buyers’ learning: they structured ongoing interactions with the buyers, wherein they 1) asked open-ended questions to help elicit and formalize the buyers’ theories, 2) added system features to support the buyers’ understanding, and 3) conducted trainings focused on metaphors and framings that would develop the buyers’ intuition of how the algorithm worked. Our study identifies conditions and practices through which expert occupations can become informated rather than deskilled through algorithm development and use.

Paul Sajda
Professor of Biomedical Engineering, Electrical Engineering, and Radiology (Physics), Columbia University

Watch Sajda's presentation

Physiologically-Informed Artificial Intelligence
Artificial intelligence (AI) systems are advancing at a rapid pace, with new systems being realized almost daily. Many of these systems rely on unsupervised training on extremely large data sets (billions of tokens of text, images, etc.), followed by a relatively small amount of supervised training on data generated by humans. This supervised training data is critical for tuning the models to specific contexts, but acquiring it is often costly because it requires human-in-the-loop expertise. In this talk, I will describe a new way to incorporate human-in-the-loop learning based on physiological labeling and state inference. Termed physiologically-informed artificial intelligence (PI-AI), the framework tracks cognitive state changes related to attention reorienting and arousal, which can be measured non-invasively via electroencephalography (EEG), electrocardiography (ECG), electrodermal activity (EDA), pupillometry, and eye-tracking. We show that such an approach can be used to build AI models that are highly personalized to individual preferences, without the individual having to overtly express those preferences. We also hypothesize scenarios where the use of PI-AI may increase trust between humans and agents via just-in-time interventions that build “bonds” as one would expect in a team.
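As a rough illustration of the PI-AI idea, the sketch below (not the authors' pipeline) treats a thresholded, hypothetical arousal index derived from physiological signals as an implicit label for personalizing a preference model, so no overt ratings are required. The feature dimensions, threshold, and classifier are all assumptions.

```python
# Illustrative sketch only: implicit preference labels derived from a
# physiological arousal signal (hypothetical data), in the spirit of PI-AI.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical recording: one row per item shown to the user.
item_features = rng.normal(size=(200, 8))        # e.g., content embeddings
arousal_score = rng.uniform(0.0, 1.0, size=200)  # e.g., fused EEG/EDA/pupil index

# Assumption: attention-reorienting/arousal above a threshold acts as an
# implicit "this mattered to me" label, replacing overt ratings.
implicit_label = (arousal_score > 0.7).astype(int)

# Personalized preference model trained on the implicit labels.
model = LogisticRegression().fit(item_features, implicit_label)

# Rank new items for this user without asking for explicit feedback.
new_items = rng.normal(size=(5, 8))
print(model.predict_proba(new_items)[:, 1])
```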

2:30 - 2:45 p.m.
Break
2:45 - 4:15 p.m.
Session 4
Session Chair: Agnes Horvat
Lionel P. Robert Jr.
Professor of Information and Associate Dean of Faculty Development and Faculty Affairs, School of Information, University of Michigan
Watch Robert's presentation

A Multi-Study Analysis of Repairing Human-Robot Trust Using Theory of Mind and Relational Demography Theory

Moshe Vardi
University Professor, Karen Ostrum George Distinguished Service Professor in Computational Engineering, Rice University
Watch Vardi's presentation

Technology and Democracy
U.S. society is in the throes of deep polarization that not only leads to political paralysis but also threatens the very foundations of democracy. The phrase "The Disunited States of America" is often heard. Other countries are displaying similar polarization. How did we get here? What went wrong?

In this talk, I argue that the current state of affairs is the result of the confluence of two tsunamis that have unfolded over the past 40 years. On one hand, there was the tsunami of technology – from the introduction of the IBM PC in 1981 to the current domination of public discourse by social media. On the other hand, there was a tsunami of neoliberal economic policies. I will argue that the combination of these two tsunamis led to both economic polarization and cognitive polarization.

4:15 - 4:30 p.m.
Remarks

Watch Johnson's remarks

E. Patrick Johnson
Dean of the School of Communication and Annenberg University Professor at Northwestern University

4:30 - 4:45 p.m.
Group Photo
4:45 - 5:15 p.m.
Reception

Kellogg Global Hub, White Auditorium

5:15 - 6:30 p.m.
Presentation & Performance
Stephen Alltop
Senior Lecturer, Conducting and Ensemble, Bienen School of Music, Northwestern University and Orchestra

6:45 - 8:15 p.m.
Dinner
7:00 - 8:00 p.m.
Diversity in AI Panel (during dinner)

Watch panel on diversity in AI

Artificial intelligence systems and machine learning algorithms are wonderful artifacts of human accomplishment and scientific rigor. The modern software tools that are now readily available and enthusiastically applied to many of our lived experiences help make interactions with technology more seamless, convenient, and efficient, at least for some people. However, the AI ecosystem, including its development, deployment, and user interaction, must address questions related to bias, ethics, and diversity throughout its implementation. Recent scholarship and mainstream media attention have heightened awareness of the ways in which a lack of diversity has resulted in the development of tools that potentially increase marginalization and discrimination, ignore the cultures and histories of different groups, and undermine the freedoms of some of their users. As a result, we view this topic as a core issue that must be highlighted as part of the workshop on Human AI Social Networks @ Work.

Moderator
Marlon Twyman II
Assistant Professor of Communication, University of Southern California Annenberg School for Communication and Journalism


Panelists
Ray Reagans
Alfred P. Sloan Professor of Management, Associate Dean for Diversity, Equity, and Inclusion, MIT Sloan School of Management

Martin Prescher
Executive Vice President and CTO, Autonomy

Andrea Guzman
Associate Professor of Journalism, Northern Illinois University

Saturday, May 20

8 - 8:45 a.m.
Breakfast
8:45 - 11 a.m.
Session 5

Kellogg Global Hub, Seminar Room 4101

Session Chair: Liz Gerber
Hyejin Youn
Associate Professor of Management and Organizations, Kellogg School of Management, Northwestern University
Watch Youn's presentation

Deconstructing Human Capital to Construct Nestedness
The underlying structure of workplace skills determines which skills are acquired at school and at work, and in what order. We construct a nested skill structure according to the degree to which each skill is widely demanded across occupations, uncovering skills' horizontal and vertical dependencies on one another. The resulting tree structure is unbalanced: some branches grow out of a sturdy, deeply rooted trunk, while other branches lack such support. We find that skills on well-supported branches enjoy higher wage premiums than those without such support. Analysis of individuals’ career changes and of the demographic age of occupations shows that these nested skills are needed more as one moves up the career ladder. Unexpectedly, we find that specialization requires significant investment in and strengthening of general skills, bolstering the trunk and roots of the nested skill tree. Finally, historical changes in occupations' skill requirements show that these branches have become more fragmented over the decade, suggesting a widening labor gap. The distribution of skills can explain geographic and demographic disparities in wealth.
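A toy sketch of the nestedness intuition, under assumed data: from a small binary occupation-by-skill matrix, it computes how widely each skill is demanded ("trunk" versus "branch") and a simple conditional-dependence check for whether a narrow skill appears only where a broader one does. The actual construction in the talk is considerably richer.

```python
# Toy sketch of skill "nestedness": from a binary occupation-by-skill matrix,
# measure how widely each skill is demanded and whether a narrow skill tends
# to appear only in occupations that also demand a broader one.
# (Hypothetical data; illustration only.)
import pandas as pd

occupations = ["nurse", "analyst", "welder", "teacher", "engineer"]
skills = ["communication", "mathematics", "programming", "welding"]

# 1 = occupation requires the skill.
M = pd.DataFrame(
    [[1, 0, 0, 0],
     [1, 1, 1, 0],
     [1, 0, 0, 1],
     [1, 1, 0, 0],
     [1, 1, 1, 0]],
    index=occupations, columns=skills)

# "Trunk" skills are demanded almost everywhere; "branch" skills are not.
generality = M.mean(axis=0).sort_values(ascending=False)
print(generality)

# Conditional dependence: P(broad skill required | narrow skill required).
# Values near 1 suggest the narrow skill is nested under the broad one.
def nested_under(narrow, broad):
    has_narrow = M[M[narrow] == 1]
    return float(has_narrow[broad].mean())

print(nested_under("programming", "mathematics"))  # 1.0 in this toy matrix
```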

Sabine Brunswicker
Professor for Digital Innovation, Founder and Director of the Research Center for Open Digital Innovation, Purdue University
Watch Brunswicker's presentation

Social Intelligence in AI Models for Successful Human-AI Teams
With advances in data-driven machine learning (ML) modeling (e.g., deep learning) and natural language processing, combined with the increasing capabilities and falling costs of cloud-based computation and data storage, artificial intelligence (AI) is transforming everyday life. AI-enabled software applications like Netflix recommenders or conversational agents like Alexa or Siri have demonstrated their ability to team up with human agents through direct or indirect social interactions to assist them in day-to-day tasks like finding a movie that matches their taste. Proponents of AI argue that successful AI-human teaming models may also solve societal challenges such as equal access to justice, or social dilemmas like energy conservation. However, the AI literature and AI practice suggest that existing teaming models fail to solve such challenges because they do not consider social motives and socialization when designing human-AI interactions. In this talk, I will present ongoing research that aims to tackle the challenge of designing AI with a focus on “social intelligence.” Specifically, I will present two work-in-progress studies. The first focuses on the role of empathy in conversational AI, in which an AI-based agent uses natural language to assist a human in solving a legal challenge. Drawing upon theories of linguistics that explain how rhetorical and syntactical elements of natural language affect a human’s willingness to socialize with and trust other actors’ recommendations, we design randomized controlled online experiments to explain the effect of empathy in conversational AI on successful AI-human teaming. The second study integrates theories of human reinforcement learning behavior in collective action games with advances in deep reinforcement learning (Q-learning). Specifically, we design a deep reinforcement learning model that aims to trigger cooperative behavior among groups of human and artificial agents through social norms, by learning a utility function that optimizes collective outcomes rather than individual interest. Using large-scale simulation, we demonstrate that deep learning agents whose AI models seek to optimize social norms in groups of human and artificial agents through their own actions can nudge human agents towards more cooperative behavior.
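The second study's core mechanism can be sketched, very loosely and not as the authors' model, with a tabular Q-learning agent in a repeated public-goods game whose reward blends individual payoff with a social-norm term; the study's deep Q-learning version learns from far richer state. All parameters below are illustrative assumptions.

```python
# Minimal sketch (not the authors' model): a tabular Q-learning agent in a
# repeated public-goods game whose reward blends individual payoff with a
# social-norm term (closeness to the group's average contribution).
import numpy as np

rng = np.random.default_rng(1)
actions = [0.0, 0.5, 1.0]           # contribution levels
n_others, multiplier, alpha, eps, gamma = 3, 1.6, 0.1, 0.1, 0.9
norm_weight = 0.7                   # how strongly the agent values the norm

Q = np.zeros(len(actions))          # stateless (single-state) Q-table

for t in range(5000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q.argmax())
    my_c = actions[a]
    others_c = rng.choice(actions, size=n_others)           # simulated humans
    pot = (my_c + others_c.sum()) * multiplier / (n_others + 1)
    individual = 1.0 - my_c + pot                            # keep + share of pot
    norm = -abs(my_c - others_c.mean())                      # deviation penalty
    reward = (1 - norm_weight) * individual + norm_weight * norm
    Q[a] += alpha * (reward + gamma * Q.max() - Q[a])

print(dict(zip(actions, Q.round(2))), "-> preferred:", actions[int(Q.argmax())])
```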

Balazs Vedres
Professor of Sociology and Social Anthropology and of Network and Data Science, Central European University
Watch Vedres's presentation

Bots Reshape Human Collaborations in the Wild
When bots appear in human collaborations, one of the most profound potential transformations (and one of the least studied) is their impact on human sociability. In publics, where the generation of consensus and a joint definition of the public good are at stake, a disruption by bots can have severe consequences. We analyze cases of large collaborative publics online (Wikipedia and Twitter), where bots appear as fully capable participants. We adopt a difference-in-difference design in which the unit of analysis is the history of a human-to-human dyad, centered on the moment when a bot establishes a connection to one of the humans. We compare the sentiment of communication, the weight of the collaborative tie, and the probability of tie termination in bot-exposed and matched (unexposed) dyads. We find, for both of our cases, that the appearance of a bot disrupts human connectivity: discussions become less rational (with heightened sentiment), the weight of collaboration decreases, and the probability of partner loss increases.
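For readers less familiar with the design, the sketch below runs the difference-in-difference comparison on synthetic dyad data (not the study's data): the coefficient on the exposed-by-post interaction recovers the simulated drop in sentiment after a bot connects to one member of the dyad.

```python
# Sketch of the difference-in-difference comparison described above, on
# synthetic dyad data: outcome = sentiment of the human-to-human tie,
# before/after the moment a bot connects to one member.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400  # dyads: half exposed to a bot, half matched controls

rows = []
for d in range(n):
    exposed = d < n // 2
    for post in (0, 1):
        base = rng.normal(0.5, 0.1)
        effect = -0.15 if (exposed and post) else 0.0   # simulated disruption
        rows.append({"dyad": d, "exposed": int(exposed), "post": post,
                     "sentiment": base + effect + rng.normal(0, 0.05)})

df = pd.DataFrame(rows)
# The exposed:post coefficient is the difference-in-differences estimate.
model = smf.ols("sentiment ~ exposed * post", data=df).fit()
print(model.params[["exposed:post"]])
```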

11 - 11:15 a.m.
Break
11:15 a.m. - 12:15 p.m.
Boxed Lunch and Panel Discussion on Human-AI Teaming

Watch panel on human-AI teaming

Panelists
Tara Behrend
Associate Professor of Psychological Sciences, Purdue University

Georgia Chao
Professor of Psychology, University of South Florida

Dan Cosley
Program Officer, National Science Foundation

Javier Omar Garcia
Chief, Hybrid Human-Technology Intelligence Branch, Army Research Laboratory

12:15 - 12:30 p.m.
Workshop Wrap-Up

Watch day 2 closing remarks
