Below are recent publications related to my current work.
Volunteer content moderators are essential to the social media ecosystem through
the roles they play in managing and supporting online social spaces. Recent work has
described moderation primarily as a functional process of actions that moderators
take, such as making rules, removing content, and banning users. However, the
nuanced ways in which volunteer moderators envision their roles within their
communities remain understudied. Informed by insights gained from 79 interviews
with volunteer moderators from three platforms, we present a conceptual map of the
territory of social roles in volunteer moderation, which identifies five categories with
22 metaphorical variants that reveal moderators’ implicit values and the heuristics
that help them make decisions. These metaphors more clearly enunciate the roles
volunteer moderators play in the broader social media content moderation apparatus,
and can drive purposeful engagement with volunteer moderators to better support the
ways they guide and shape their communities.
Research in online content moderation has a long history of exploring different forms that moderation can
take, including both user-driven moderation models on community-based platforms like Wikipedia, Facebook
Groups, and Reddit, and centralized corporate moderation models on platforms like Twitter and Instagram.
In this work I review different approaches to moderation research with the goal of providing a roadmap for
researchers studying community self-moderation. I contrast community-based moderation research with
platforms and policies-focused moderation research, and argue that the former has an important role to
play in shaping discussions about the future of online moderation. I provide six guiding questions for future
research that, if answered, can support the development of a form of user-driven moderation that is widely
implementable across a variety of social spaces online, offering an alternative to the corporate moderation
models that dominate public debate and discussion.
While the majority of research in chatbot design has focused on creating chatbots that
engage with users one-on-one, less work has focused on the design of conversational agents
for online communities. In this paper we present results from a three-week test of a social
chatbot in an established online community. During this study, the chatbot “grew up” from
“birth” through its teenage years, engaging with community members and “learning”
vocabulary from their conversations. We discuss the design of this chatbot, how users'
interactions with it evolved over the course of the study, and how it impacted the community
as a whole. We focus in depth on how we addressed challenges in developing a chatbot whose
vocabulary could be shaped by users. We conclude with implications for the role of machine
learning in social interactions in online communities and potential future directions for
design of community-based chatbots.
Every member's integration into an online community begins with a decision to
participate for the first time. The challenges of effective recruitment, management,
and retention of new users have been extensively explored in social computing research.
However, little work has looked at in-the-moment factors that lead users to decide to
participate instead of “lurk”, conditions which can be shaped to draw new users in at
crucial moments. In this work we analyze 183 million messages scraped from chatrooms
on the livestreaming platform Twitch in order to understand differences between first-time
participants' and regulars' behaviors and to identify conditions that encourage first-time
participation. We find that the presence of diverse types of users increases the likelihood of new
participation, with effects depending on the size of the community. We also find that
information-seeking behaviors in first-time participation are negatively associated with
retention in the short and medium term.
Care in communities has a powerful influence on potentially disruptive social encounters.
Practising care in moderation means exposing a group’s core values, which, in turn, has the
potential to strengthen identity and relationships in communities. Dissent is as inevitable
in online communities as it is in their offline counterparts. However, dissent can be productive
by sparking discussions that drive the evolution of community norms and boundaries, and there is
value in understanding the role of moderation in this process. Our work draws on an exploratory
analysis of moderation practices in the MetaFilter community, focusing on cases of intervention
and response. We identify and analyse MetaFilter moderation through the metaphor of “taking care of a
fruit tree”, drawn from an interview with moderators on MetaFilter. We address the
relevance of care as it is evidenced in these MetaFilter exchanges, and discuss what it might
mean to approach an analysis of online moderation practices with a focus on nurturing care. We
consider how HCI researchers might make use of care-as-nurture as a frame to identify multi-faceted
and nuanced concepts characterising dissent and to develop tools for the sustainable support of
online communities and their moderators.
This work investigates how social agents can be designed to create a sense of ownership over them
within a group of users. Social agents, such as conversational agents and chatbots, currently interact
with people in impersonal, isolated, and often one-on-one interactions: one user and one agent.
This is likely to change as agents become more socially sophisticated and integrated in social fabrics.
Previous research has indicated that understanding who owns an agent can assist in creating expectations
and understanding who an agent is accountable to within a group. We present findings from a three-week
case study in which we implemented a chatbot that was successful in creating a sense of collective
ownership within a community. We discuss the design choices that led to this outcome and implications
for social agent design.
A wide variety of design strategies, tools, and processes are used across the game industry. Prior work has shown that these
processes are often collaborative, with experts in different domains contributing to different parts of the whole. However,
the ways in which these professionals give and receive peer feedback have not yet been studied in depth. In this paper we
present results from interviews with industry professionals at two game studios, describing the ways they give feedback. We
propose a new, six-step process that describes the full feedback cycle, from making plans to receive feedback to reflecting and
acting upon that feedback. This process serves as a starting point for researchers studying peer feedback in games, and
allows for comparison of processes across different types of studios. It will also help studios formalize their understanding
of their own processes and consider alternative processes that might better fit their needs.
The rise of game streaming services has driven a complementary increase in research on such platforms. As this new area takes shape,
there is a need to understand the approaches being used in the space, and how common practices can be shared and replicated between
researchers with different disciplinary backgrounds. In this paper, we describe a formal literature review of game streaming research.
Papers were coded for their research focus, primary method, and type of data collected. Across the prior work we found three common themes:
(1) work that is readily supported by existing technical infrastructure, (2) work that does not require explicit technical support,
and (3) work that would benefit from further technical development. By identifying these needs in the literature, we take the first step
toward developing a research toolkit for game streaming platforms that can unify the breadth of methods being applied in the space.
Large-scale streaming platforms such as Twitch are becoming increasingly popular, but detailed audience-streamer interaction dynamics
remain unexplored at scale. In this paper, we perform a mixed methods study on a dataset with over 12 million audience
chat messages and 45 hours of streamed video to understand audience participation and streamer performance on Twitch. We uncover five
types of streams based on size and audience participation styles, from small streams with close streamer-audience interactions
to massive streams with stadium-style audiences. We discuss challenges and opportunities emerging for streamers and
audiences from each style and conclude by providing data-backed design implications that empower streamers, audiences, live streaming platforms, and game designers.
Chatbots have grown as a space for research and development in recent years due both to the realization of their commercial potential and to advancements in language processing that have facilitated more natural conversations.
However, nearly all chatbots to date have been designed for dyadic, one-on-one communication with users. In this paper we present a comprehensive review of research on chatbots supplemented by a review of commercial and independent
chatbots. We argue that chatbots’ social roles and conversational capabilities beyond dyadic interactions have been underexplored, and that expansion into this design space could support richer social interactions in online
communities and help address the longstanding challenges of maintaining, moderating, and growing these communities. In order to identify opportunities beyond dyadic interactions, we used research-through-design methods to generate more
than 400 concepts for new social chatbots, and we present seven categories that emerged from analysis of these ideas.
Ensuring high-quality, civil social interactions remains a vexing challenge in many online spaces. In the present work, we introduce a novel approach to address this problem: using psychologically “embedded” CAPTCHAs containing stimuli
intended to prime positive emotions and mindsets. An exploratory randomized experiment (N = 454 Mechanical Turk workers) tested the impact of eight new CAPTCHA designs implemented on a simulated, politically charged comment
thread. Results revealed that the two interventions that were the most successful at activating positive affect also significantly increased the positivity of tone and analytical complexity of argumentation in participants’ responses. A focused follow-up experiment (N = 120 Mechanical Turk
workers) revealed that exposure to CAPTCHAs featuring image sets previously validated to evoke low-arousal positive emotions significantly increased the positivity of sentiment and the levels of complexity and social connectedness in
participants’ posts. We offer several explanations for these results and discuss the practical and ethical implications of designing interfaces to influence discourse in online forums.
People with health concerns go to online health support groups to obtain help and advice. To do so, they frequently disclose personal details, many times in public. Although research in non-health settings
suggests that people self-disclose less in public than in private, this pattern may not apply to health support groups where people want to get relevant help. Our work examines how the use of private and public channels influences members’ self-disclosure in an
online cancer support group, and how channels moderate the influence of self-disclosure on reciprocity and receiving support. By automatically measuring people’s self-disclosure at scale, we found
that members of cancer support groups revealed more negative self-disclosure in the public channels compared to the private channels. Although one’s self-disclosure leads others to self-disclose and to
provide support, these effects were generally stronger in the private channel. These channel effects probably occur because the public channels are the primary venue for support exchange, while the
private channels are mainly used for follow-up conversations. We discuss theoretical and practical implications of our work.
Online communities provide a forum for rich social interaction and identity development for billions of internet users worldwide. In order to manage these communities, platform owners have increasingly turned to commercial
content moderation, which includes both the use of moderation algorithms and the employment of professional moderators, rather than user-driven moderation, to detect and respond to anti-normative behaviors such as harassment and the
spread of offensive content. We present findings from semi-structured interviews with 56 volunteer moderators of online communities across three platforms (Twitch, Reddit, and Facebook), from which we derived a generalized model
categorizing the ways moderators engage with their communities and explaining how these communities develop as a result. This model contains three processes: being and becoming a moderator; moderation tasks, actions, and responses;
and rules and community development. In this work, we describe how moderators contribute to the development of meaningful communities, both with and without algorithmic support.
Bots, or programs designed to engage in social spaces and perform automated tasks, are typically understood as automated tools or as social "chatbots." In this paper, we consider bots’ place alongside users within diverse
communities in the emerging social ecosystem of audience participation platforms, guided by concepts from structural role theory. We perform a large-scale analysis of bot activity levels on Twitch, finding that they
communicate at a much greater rate than other types of users. We build on prior literature on bot functionalities to identify the roles bots play on Twitch, how these roles vary across different types of Twitch communities,
and how users engage with them and vice versa. We conclude with a discussion of where opportunities lie to re-conceptualize and re-design bots as social actors who help communities grow and evolve.
Research in computer-supported cooperative work has historically focused on behaviors of individuals at scale, using frames of interpersonal interaction such as Goffman’s theories of self-presentation. These
frames prioritize research detailing the characteristics, personal identities, and behaviors of large numbers of interacting individuals, while the social identity concepts that lead to intra- and inter-group dynamics
have received far less attention. We argue that the emergent properties of self-categorization and social identity, which are particularly fluid and complex in online spaces, provide a complementary perspective with
which to re-examine traditional topics in social computing. We discuss the applicability of the Social Identity Perspective to both established and new research domains in CSCW, proposing alternative perspectives on self-presentation,
social support, collaboration, misbehavior, and leadership. We propose a set of methodological considerations derived from this body of theories and accompanying empirical work. We close by considering
how broad concepts and lessons from social identity provide a valuable lens for inspiring future work in CSCW.
Livestreamed APGs (audience participation games) allow stream viewers to participate meaningfully in a streamer’s gameplay. However, streaming interfaces are not designed
to meet the needs of audience participants. In order to explore the game design space of APGs, we provided three game development teams with an audience participation interface
development toolkit. Teams iteratively developed and tested APGs over the course of ten months, and then reflected on common design challenges across the three games. Six challenges
were identified: latency, screen sharing, attention management, player agency, audience-streamer relationships, and shifting schedules. The impact of these challenges on players was
then explored through external playtests. We conclude with implications for the future of APG design.
In this paper we explore audience participation games, a type of game that draws spectators into a role where they can impact gameplay in meaningful ways. To better
understand this design space, we developed several versions of two prototype games as design probes. We livestreamed them to an online audience in order to develop
a framework for audience motivations and participation styles, to explore ways in which mechanics can affect audience members’ sense of agency, and to identify promising
design spaces. Our results show the breadth of opportunities and challenges that designers face in creating engaging audience participation games.
Online communities have the potential to be supportive, cruel, or anywhere in between. The development of positive norms for interaction can help users build bonds, grow, and
learn. Using millions of messages sent in Twitch chatrooms, we explore the effectiveness of methods for encouraging and discouraging specific behaviors, including
taking advantage of imitation effects through setting positive examples and using moderation tools to discourage antisocial behaviors. Consistent with aspects of imitation theory and deterrence theory, users imitated examples of
behavior that they saw, and more so for behaviors from high-status users. Proactive moderation tools, such as chat modes which restricted the ability to post certain content,
proved effective at discouraging spam behaviors, while reactive bans were able to discourage a wider variety of behaviors. This work considers the intersection of tools, authority, and types of behaviors, offering a new frame
through which to consider the development of moderation strategies.
Student learning outcomes have long been established as an important component in the process of developing subject content, communicating expectations to students, and designing effective
assessments. This project focused on mapping the relationships among outcomes across the undergraduate curriculum in the Department of Aeronautics and Astronautics at MIT. Through this project,
we expanded upon existing sets of outcomes and created new sets where none previously existed to connect subjects in the undergraduate curriculum in an integrated framework.