Joseph Seering

Below are recent publications related to my current work.

2024


Abstract

Moderating online spaces effectively is not a matter of simply taking down content: moderators also provide private feedback and defuse situations before they cross the line into harm. However, moderators have little tool support for these activities, which often occur in the backchannel rather than in front of the entire community. In this paper, we introduce Chillbot, a moderation tool for Discord designed to facilitate backchanneling from moderators to users. With Chillbot, moderators gain the ability to send rapid anonymous feedback responses to situations where removal or formal punishment is too heavy-handed to be appropriate, helping educate users about how to improve their behavior while avoiding direct confrontations that can put moderators at risk. We evaluated Chillbot through a two-week field deployment on eleven Discord servers ranging in size from 25 to over 240,000 members. Moderators in these communities used Chillbot more than four hundred times during the study, and moderators from six of the eleven servers continued using the tool past the end of the formal study period. Based on this deployment, we describe implications for the design of a broader variety of means by which moderation tools can help shape communities' norms and behavior.
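
As a rough illustration of the backchannel pattern Chillbot supports, the sketch below implements an anonymous, templated feedback command using the discord.py library. The command name, permission check, and feedback templates are assumptions for illustration, not Chillbot's actual design.

import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.members = True
intents.message_content = True
bot = commands.Bot(command_prefix="!", intents=intents)

# Canned feedback messages a moderator can send quickly (illustrative).
TEMPLATES = {
    "tone": "A moderator noticed your recent messages came across as hostile. "
            "Please keep the conversation civil.",
    "topic": "A moderator asks that you move this discussion to a more "
             "appropriate channel.",
}

@bot.command(name="nudge")
@commands.has_permissions(moderate_members=True)
async def nudge(ctx: commands.Context, member: discord.Member, template: str):
    """Send an anonymous feedback DM and confirm privately to the moderator."""
    text = TEMPLATES.get(template)
    if text is None:
        await ctx.author.send(f"Unknown template: {template}")
        return
    try:
        # The DM comes from the bot, so the individual moderator stays anonymous.
        await member.send(f"[{ctx.guild.name}] {text}")
    except discord.Forbidden:
        await ctx.author.send("Could not DM that user (DMs disabled).")
        return
    await ctx.message.delete()  # keep the nudge out of public chat
    await ctx.author.send(f"Feedback sent to {member.display_name}.")

# bot.run("YOUR_BOT_TOKEN")  # token elided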

PDF | ACM DL link pending


2023


Abstract

In the summer of 2021, users on the livestreaming platform Twitch were targeted by a wave of “hate raids,” a form of attack that overwhelms a streamer’s chatroom with hateful messages, often through the use of bots and automation. Using a mixed-methods approach, we combine a quantitative measurement of attacks across the platform with interviews of streamers and third-party bot developers. We present evidence that confirms that some hate raids were highly targeted, hate-driven attacks, but we also observe another mode of hate raid similar to networked harassment and specific forms of subcultural trolling. We show that streamers who self-identify as LGBTQ+ and/or Black were disproportionately targeted and that hate raid messages were most commonly rooted in anti-Black racism and antisemitism. We also document how these attacks elicited rapid community responses in both bolstering reactive moderation and developing proactive mitigations for future attacks. We conclude by discussing how platforms can better prepare for attacks and protect at-risk communities while considering the division of labor between community moderators, tool-builders, and platforms.
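
As a loose illustration of the quantitative side of such a measurement, the sketch below flags minutes in a chat log where the message rate spikes and most messages come from never-before-seen accounts, one simple signature of a bot-driven raid. The thresholds, column names, and the heuristic itself are assumptions for illustration; the paper's measurement pipeline is more involved.

import pandas as pd

def flag_raid_minutes(chat: pd.DataFrame, rate_z: float = 4.0,
                      new_share_min: float = 0.8) -> pd.DataFrame:
    """chat: one row per message, with 'timestamp' (datetime) and 'user'."""
    chat = chat.sort_values("timestamp").copy()
    first_seen = chat.groupby("user")["timestamp"].transform("min")
    chat["is_new"] = chat["timestamp"] == first_seen  # a user's first message
    chat["minute"] = chat["timestamp"].dt.floor("min")
    per_min = chat.groupby("minute").agg(msgs=("user", "size"),
                                         new_share=("is_new", "mean"))
    z = (per_min["msgs"] - per_min["msgs"].mean()) / per_min["msgs"].std()
    # Flag minutes that are both unusually busy and dominated by new accounts.
    return per_min[(z > rate_z) & (per_min["new_share"] > new_share_min)]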

PDF | ACM DL


Abstract

Volunteer moderators are an increasingly essential component of effective community management across a range of services, such as Facebook, Reddit, Discord, YouTube, and Twitch. Prior work has investigated how users of these services become moderators, their attitudes towards community moderation, and the work that they perform, largely through interviews with community moderators and managers. In this paper, we analyze survey data from a large, representative sample of 1,053 adults in the United States who are active Twitch moderators. Our findings – examining moderator recruitment, motivations, tasks, and roles – validate observations from prior qualitative work on Twitch moderation, showing not only how they generalize across a wider population of livestreaming contexts, but also how they vary. For example, while moderators in larger channels are more likely to have been chosen because they were regular, active participants, moderators in smaller channels are more likely to have had a pre-existing connection with the streamer. We similarly find that channel size predicts differences in how new moderators are onboarded and their motivations for becoming moderators. Finally, we find that moderators’ self-perceived roles map to differences in the patterns of conversation, socialization, enforcement, and other tasks that they perform. We discuss these results, how they relate to prior work on community moderation across services, and applications to research and design in volunteer moderation.

PDF | ACM DL


2022


Abstract

With increasing attention to online anti-social behaviors such as personal attacks and bigotry, it is critical to have an accurate accounting of how widespread anti-social behaviors are. In this paper, we empirically measure the prevalence of anti-social behavior in one of the world’s most popular online community platforms. We operationalize this goal as measuring the proportion of unmoderated comments in the 97 most popular communities on Reddit that violate eight widely accepted platform norms. To achieve this goal, we contribute a human-AI pipeline for identifying these violations and a bootstrap sampling method to quantify measurement uncertainty. We find that 6.25% (95% confidence interval (CI) [5.36%, 7.13%]) of all comments in 2016, and 4.28% (95% CI [2.50%, 6.26%]) in 2020-2021, are violations of these norms. Most anti-social behaviors remain unmoderated: moderators only removed one in twenty violating comments in 2016, and one in ten violating comments in 2020. Personal attacks were the most prevalent category of norm violation; pornography and bigotry were the most likely to be moderated, while politically inflammatory comments and misogyny/vulgarity were the least likely to be moderated. This paper offers a method and set of empirical results for tracking these phenomena as both the social practices (e.g., moderation) and technical practices (e.g., design) evolve.
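
As a minimal illustration of the bootstrap step named above, the sketch below resamples a labeled set of comments with replacement to put a confidence interval around an estimated violation rate. It is a simplified stand-in: the paper's pipeline also combines human and AI judgments, which this sketch does not model.

import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(labels: np.ndarray, n_boot: int = 10_000,
                 alpha: float = 0.05) -> tuple[float, float, float]:
    """labels: 1 = norm-violating comment, 0 = not. Returns (est, lo, hi)."""
    est = labels.mean()
    boots = np.array([rng.choice(labels, size=len(labels), replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return est, lo, hi

# Toy example: 625 violations in a sample of 10,000 comments (about 6.25%).
sample = np.array([1] * 625 + [0] * 9375)
print(bootstrap_ci(sample))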

PDF | ACM DL


Abstract

While most moderation actions on major social platforms are performed by either the platforms themselves or volunteer moderators, it is rare for platforms to collaborate directly with moderators to address problems. This paper examines how the group-chatting platform Discord coordinated with experienced volunteer moderators to respond to hate and harassment toward LGBTQ+ communities during Pride Month, June 2021, in what came to be known as the “Pride Mod” initiative. Representatives from Discord and volunteer moderators collaboratively identified and communicated with targeted communities, and volunteers temporarily joined servers that requested support to supplement those servers’ existing volunteer moderation teams. Though LGBTQ+ communities were subject to a wave of targeted hate during Pride Month, the communities that received the requested volunteer support reported having a better capacity to handle the issues that arose. This paper reports the results of interviews with 11 moderators who participated in the initiative as well as the Discord employee who coordinated it. We show how this initiative was made possible by the way Discord has cultivated trust and built formal connections with its most active volunteers, and discuss the ethical implications of formal collaborations between for-profit platforms and volunteer users.

PDF | Journal of Online Trust & Safety


2021


Abstract

Volunteer content moderators are essential to the social media ecosystem through the roles they play in managing and supporting online social spaces. Recent work has described moderation primarily as a functional process of actions that moderators take, such as making rules, removing content, and banning users. However, the nuanced ways in which volunteer moderators envision their roles within their communities remain understudied. Informed by insights gained from 79 interviews with volunteer moderators from three platforms, we present a conceptual map of the territory of social roles in volunteer moderation, which identifies five categories with 22 metaphorical variants that reveal moderators’ implicit values and the heuristics that help them make decisions. These metaphors more clearly enunciate the roles volunteer moderators play in the broader social media content moderation apparatus, and can drive purposeful engagement with volunteer moderators to better support the ways they guide and shape their communities.

PDF | SAGE Publications


Abstract

Online chat functions as a discussion channel for diverse social issues. However, deliberative discussion and consensus-reaching can be difficult in online chats in part because of the lack of structure. To explore the feasibility of a conversational agent that enables deliberative discussion, we designed and developed DebateBot, a chatbot that structures discussion and encourages reticent participants to contribute. We conducted a 2 (discussion structure: unstructured vs. structured) × 2 (discussant facilitation: unfacilitated vs. facilitated) between-subjects experiment (N = 64, 12 groups). Our findings are as follows: (1) Structured discussion positively affects discussion quality by generating diverse opinions within a group and resulting in a high level of perceived deliberative quality. (2) Facilitation drives a high level of opinion alignment between group consensus and independent individual opinions, resulting in authentic consensus reaching. Facilitation also drives more even contribution and a higher level of task cohesion and communication fairness. Our results suggest that a chatbot agent could partially substitute for a human moderator in deliberative discussions.
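
As a rough sketch of the two factors manipulated in the experiment, the toy facilitator below advances a discussion through fixed stages (structure) and prompts members who have not yet spoken (facilitation). The stage names and prompts are invented for illustration and are not DebateBot's actual protocol.

from dataclasses import dataclass, field

STAGES = ["share initial opinions", "discuss disagreements", "vote on a consensus"]

@dataclass
class Discussion:
    members: list[str]
    stage: int = 0
    spoken: set[str] = field(default_factory=set)

    def record(self, member: str) -> None:
        self.spoken.add(member)

    def next_prompt(self) -> str:
        quiet = [m for m in self.members if m not in self.spoken]
        if quiet:  # facilitation: invite reticent participants by name
            return f"@{quiet[0]}, we'd like your view: {STAGES[self.stage]}."
        if self.stage < len(STAGES) - 1:  # structure: advance when all have spoken
            self.stage += 1
            self.spoken.clear()
            return f"Next stage: {STAGES[self.stage]}."
        return "Discussion complete. Thanks, everyone!"

d = Discussion(["ana", "ben", "cho"])
d.record("ana"); d.record("ben")
print(d.next_prompt())  # nudges cho before the group moves on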

PDF | ACM DL

2020


Abstract

Research in online content moderation has a long history of exploring different forms that moderation can take, including both user-driven moderation models on community-based platforms like Wikipedia, Facebook Groups, and Reddit, and centralized corporate moderation models on platforms like Twitter and Instagram. In this work I review different approaches to moderation research with the goal of providing a roadmap for researchers studying community self-moderation. I contrast community-based moderation research with platform- and policy-focused moderation research, and argue that the former has an important role to play in shaping discussions about the future of online moderation. I provide six guiding questions for future research that, if answered, can support the development of a form of user-driven moderation that is widely implementable across a variety of social spaces online, offering an alternative to the corporate moderation models that dominate public debate and discussion.

PDF | ACM DL | List of Community Self Governance Literature (updated Sept 2020)


Abstract

While the majority of research in chatbot design has focused on creating chatbots that engage with users one-on-one, less work has focused on the design of conversational agents for online communities. In this paper we present results from a three-week test of a social chatbot in an established online community. During this study, the chatbot “grew up” from “birth” through its teenage years, engaging with community members and “learning” vocabulary from their conversations. We discuss the design of this chatbot, how users' interactions with it evolved over the course of the study, and how it impacted the community as a whole. We focus in depth on how we addressed challenges in developing a chatbot whose vocabulary could be shaped by users. We conclude with implications for the role of machine learning in social interactions in online communities and potential future directions for design of community-based chatbots.
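
As a toy illustration of a chatbot whose vocabulary is shaped by its community, the sketch below learns a simple bigram model from chat messages and "babbles" replies built only from words it has seen. The mechanism is an assumption for illustration, not the study's actual implementation.

import random
from collections import defaultdict

class BabyBot:
    """Learns word pairs from community messages and echoes them back."""

    def __init__(self) -> None:
        self.bigrams: dict[str, list[str]] = defaultdict(list)

    def learn(self, message: str) -> None:
        words = message.lower().split()
        for a, b in zip(words, words[1:]):
            self.bigrams[a].append(b)

    def babble(self, seed: str, length: int = 8) -> str:
        out = [seed]
        for _ in range(length - 1):
            followers = self.bigrams.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

bot = BabyBot()
bot.learn("the stream was great today")
bot.learn("the chat was friendly today")
print(bot.babble("the"))  # e.g., "the chat was great today"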

PDF | ACM DL


Abstract

In the course of every member's integration into an online community, a decision must be made to participate for the first time. The challenges of effective recruitment, management, and retention of new users have been extensively explored in social computing research. However, little work has looked at in-the-moment factors that lead users to decide to participate instead of “lurk”, conditions which can be shaped to draw new users in at crucial moments. In this work we analyze 183 million messages scraped from chatrooms on the livestreaming platform Twitch in order to understand differences between first-time participants' and regulars' behaviors and to identify conditions that encourage first-time participation. We find that the presence of diverse types of users increases the likelihood of new participation, with effects depending on the size of the community. We also find that information-seeking behaviors in first-time participation are negatively associated with retention in the short and medium term.
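
For a sense of how first-time participation can be operationalized in scraped chat logs, the sketch below labels each user's earliest message in a channel as their first participation. The column names are assumptions for illustration; the paper's feature set is much richer.

import pandas as pd

def label_first_messages(chat: pd.DataFrame) -> pd.DataFrame:
    """chat: rows with 'channel', 'user', and 'timestamp' (datetime) columns."""
    chat = chat.sort_values("timestamp").copy()
    # True only for each user's earliest message within a channel.
    chat["first_time"] = ~chat.duplicated(subset=["channel", "user"])
    return chat

chat = pd.DataFrame({
    "channel": ["a", "a", "a"],
    "user": ["u1", "u2", "u1"],
    "timestamp": pd.to_datetime(["2020-01-01 10:00", "2020-01-01 10:01",
                                 "2020-01-01 10:02"]),
})
print(label_first_messages(chat))  # u1's and u2's first messages are flagged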

PDF | ACM DL


Abstract

Care in communities has a powerful influence on potentially disruptive social encounters. Practising care in moderation means exposing a group’s core values, which, in turn, has the potential to strengthen identity and relationships in communities. Dissent is as inevitable in online communities as it is in their offline counterparts. However, dissent can be productive by sparking discussions that drive the evolution of community norms and boundaries, and there is value in understanding the role of moderation in this process. Our work draws on an exploratory analysis of moderation practices in the MetaFilter community, focusing on cases of intervention and response. We identify and analyse MetaFilter moderation through the metaphor of “taking care of a fruit tree”, a phrase quoted from an interview with MetaFilter moderators. We address the relevance of care as it is evidenced in these MetaFilter exchanges, and discuss what it might mean to approach an analysis of online moderation practices with a focus on nurturing care. We consider how HCI researchers might make use of care-as-nurture as a frame to identify multi-faceted and nuanced concepts characterising dissent and to develop tools for the sustainable support of online communities and their moderators.

PDF | ACM DL


Abstract

This work investigates how social agents can be designed to create a sense of ownership over them within a group of users. Social agents, such as conversational agents and chatbots, currently interact with people in impersonal, isolated, and often one-on-one interactions: one user and one agent. This is likely to change as agents become more socially sophisticated and integrated in social fabrics. Previous research has indicated that understanding who owns an agent can assist in creating expectations and understanding who an agent is accountable to within a group. We present findings from a three-week case study in which we implemented a chatbot that was successful in creating a sense of collective ownership within a community. We discuss the design choices that led to this outcome and implications for social agent design.

PDF | ACM DL link pending

2019


Abstract

A wide variety of design strategies, tools, and processes are used across the game industry. Prior work has shown that these processes are often collaborative, with experts in different domains contributing to different parts of the whole. However, the ways in which these professionals give and receive peer feedback have not yet been studied in depth. In this paper we present results from interviews with industry professionals at two game studios, describing the ways they give feedback. We propose a new, six-step process that describes the full feedback cycle from making plans to receive feedback to reflecting and acting upon that feedback. This process serves as a starting point for researchers studying peer feedback in games, and allows for comparison of processes across different types of studios. It will also help studios formalize their understanding of their own processes and consider alternative processes that might better fit their needs.

PDF | ACM DL


Abstract

The rise of game streaming services has driven a complementary increase in research on such platforms. As this new area takes shape, there is a need to understand the approaches being used in the space, and how common practices can be shared and replicated between researchers with different disciplinary backgrounds. In this paper, we describe a formal literature review of game streaming research. Papers were coded for their research focus, primary method, and type of data collected. Across the prior work we found three common themes: (1) work that is readily supported by existing technical infrastructure, (2) work that does not require explicit technical support, and (3) work that would benefit from further technical development. By identifying these needs in the literature, we take the first step toward developing a research toolkit for game streaming platforms that can unify the breadth of methods being applied in the space.

PDF | ACM DL


Abstract

Large-scale streaming platforms such as Twitch are becoming increasingly popular, but detailed audience-streamer interaction dynamics remain unexplored at scale. In this paper, we perform a mixed methods study on a dataset with over 12 million audience chat messages and 45 hours of streamed video to understand audience participation and streamer performance on Twitch. We uncover five types of streams based on size and audience participation styles, from small streams with close streamer-audience interactions to massive streams with stadium-style audiences. We discuss challenges and opportunities emerging for streamers and audiences from each style and conclude by providing data-backed design implications that empower streamers, audiences, live streaming platforms, and game designers.
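
As an illustration of how stream types might be derived from such data, the sketch below clusters streams on simple size and participation features with k-means. The features, toy values, and choice of scikit-learn are assumptions that echo the paper's five types rather than reproduce its method.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per stream: audience size, messages per viewer, unique-chatter share.
X = np.array([
    [50, 3.2, 0.60],
    [80, 2.9, 0.55],
    [5_000, 0.8, 0.20],
    [90_000, 0.2, 0.07],
    [120_000, 0.1, 0.05],
])
X_scaled = StandardScaler().fit_transform(np.log1p(X))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels)  # on real data, k=5 would parallel the paper's five stream types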

PDF | ACM DL


Abstract

Chatbots have grown as a space for research and development in recent years due both to the realization of their commercial potential and to advancements in language processing that have facilitated more natural conversations. However, nearly all chatbots to date have been designed for dyadic, one-on-one communication with users. In this paper we present a comprehensive review of research on chatbots supplemented by a review of commercial and independent chatbots. We argue that chatbots’ social roles and conversational capabilities beyond dyadic interactions have been underexplored, and that expansion into this design space could support richer social interactions in online communities and help address the longstanding challenges of maintaining, moderating, and growing these communities. In order to identify opportunities beyond dyadic interactions, we used research-through-design methods to generate more than 400 concepts for new social chatbots, and we present seven categories that emerged from analysis of these ideas.

PDF | ACM DL


Abstract

Ensuring high-quality, civil social interactions remains a vexing challenge in many online spaces. In the present work, we introduce a novel approach to address this problem: using psychologically “embedded” CAPTCHAs containing stimuli intended to prime positive emotions and mindsets. An exploratory randomized experiment (N = 454 Mechanical Turk workers) tested the impact of eight new CAPTCHA designs implemented on a simulated, politically charged comment thread. Results revealed that the two interventions that were the most successful at activating positive affect also significantly increased the positivity of tone and analytical complexity of argumentation in participants’ responses. A focused follow-up experiment (N = 120 Mechanical Turk workers) revealed that exposure to CAPTCHAs featuring image sets previously validated to evoke low-arousal positive emotions significantly increased the positivity of sentiment and the levels of complexity and social connectedness in participants’ posts. We offer several explanations for these results and discuss the practical and ethical implications of designing interfaces to influence discourse in online forums.

PDF | ACM DL


Abstract

People with health concerns go to online health support groups to obtain help and advice. To do so, they frequently disclose personal details, often in public. Although research in non-health settings suggests that people self-disclose less in public than in private, this pattern may not apply to health support groups where people want to get relevant help. Our work examines how the use of private and public channels influences members’ self-disclosure in an online cancer support group, and how channels moderate the influence of self-disclosure on reciprocity and receiving support. By automatically measuring people’s self-disclosure at scale, we found that members of cancer support groups revealed more negative self-disclosure in the public channels compared to the private channels. Although one’s self-disclosure leads others to self-disclose and to provide support, these effects were generally stronger in the private channel. These channel effects probably occur because the public channels are the primary venue for support exchange, while the private channels are mainly used for follow-up conversations. We discuss theoretical and practical implications of our work.

PDF | ACM DL


Abstract

Online communities provide a forum for rich social interaction and identity development for billions of internet users worldwide. In order to manage these communities, platform owners have increasingly turned to commercial content moderation, which includes both the use of moderation algorithms and the employment of professional moderators, rather than user-driven moderation, to detect and respond to anti-normative behaviors such as harassment and spread of offensive content. We present findings from semi-structured interviews with 56 volunteer moderators of online communities across three platforms (Twitch, Reddit, and Facebook), from which we derived a generalized model categorizing the ways moderators engage with their communities and explaining how these communities develop as a result. This model contains three processes: being and becoming a moderator; moderation tasks, actions, and responses; and rules and community development. In this work, we describe how moderators contribute to the development of meaningful communities, both with and without algorithmic support.

PDF | SAGE Publications


2018


Abstract

Bots, or programs designed to engage in social spaces and perform automated tasks, are typically understood as automated tools or as social "chatbots." In this paper, we consider bots’ place alongside users within diverse communities in the emerging social ecosystem of audience participation platforms, guided by concepts from structural role theory. We perform a large-scale analysis of bot activity levels on Twitch, finding that they communicate at a much greater rate than other types of users. We build on prior literature on bot functionalities to identify the roles bots play on Twitch, how these roles vary across different types of Twitch communities, and how users engage with them and vice versa. We conclude with a discussion of where opportunities lie to re-conceptualize and re-design bots as social actors who help communities grow and evolve.
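
As a small illustration of the activity-rate comparison described above, the sketch below computes messages per account and contrasts known bot accounts with other users. The bot list and column names are assumptions for illustration.

import pandas as pd

chat = pd.DataFrame({
    "user": ["nightbot", "nightbot", "nightbot", "viewer1", "viewer2"],
    "message": ["!uptime", "welcome!", "remember to follow", "hi", "lol"],
})
KNOWN_BOTS = {"nightbot", "moobot"}  # assumed list of bot account names

counts = chat.groupby("user").size()
is_bot = counts.index.isin(list(KNOWN_BOTS))
print("mean messages per bot account:", counts[is_bot].mean())
print("mean messages per human account:", counts[~is_bot].mean())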

PDF | ACM DL


Abstract

Research in computer-supported cooperative work has historically focused on behaviors of individuals at scale, using frames of interpersonal interaction such as Goffman’s theories of self-presentation. These frames prioritize research detailing the characteristics, personal identities, and behaviors of large numbers of interacting individuals, while the social identity concepts that lead to intra- and inter-group dynamics have received far less attention. We argue that the emergent properties of self-categorization and social identity, which are particularly fluid and complex in online spaces, provide a complementary perspective with which to re-examine traditional topics in social computing. We discuss the applicability of the Social Identity Perspective to both established and new research domains in CSCW, proposing alternative perspectives on self-presentation, social support, collaboration, misbehavior, and leadership. We propose a set of methodological considerations derived from this body of theories and accompanying empirical work. We close by considering how broad concepts and lessons from social identity provide a valuable lens for inspiring future work in CSCW.

PDF | ACM DL


Abstract

Livestreamed APGs (audience participation games) allow stream viewers to participate meaningfully in a streamer’s gameplay. However, streaming interfaces are not designed to meet the needs of audience participants. In order to explore the game design space of APGs, we provided three game development teams with an audience participation interface development toolkit. Teams iteratively developed and tested APGs over the course of ten months, and then reflected on common design challenges across the three games. Six challenges were identified: latency, screen sharing, attention management, player agency, audience-streamer relationships, and shifting schedules. The impact of these challenges on players were then explored through external playtests. We conclude with implications for the future of APG design.

PDF | ACM DL


2017


Abstract

In this paper we explore audience participation games, a type of game that draws spectators into a role where they can impact gameplay in meaningful ways. To better understand this design space, we developed several versions of two prototype games as design probes. We livestreamed them to an online audience in order to develop a framework for audience motivations and participation styles, to explore ways in which mechanics can affect audience members’ sense of agency, and to identify promising design spaces. Our results show the breadth of opportunities and challenges that designers face in creating engaging audience participation games.

PDF | ACM DL


Abstract

Online communities have the potential to be supportive, cruel, or anywhere in between. The development of positive norms for interaction can help users build bonds, grow, and learn. Using millions of messages sent in Twitch chatrooms, we explore the effectiveness of methods for encouraging and discouraging specific behaviors, including taking advantage of imitation effects through setting positive examples and using moderation tools to discourage antisocial behaviors. Consistent with aspects of imitation theory and deterrence theory, users imitated examples of behavior that they saw, and more so for behaviors from high status users. Proactive moderation tools, such as chat modes which restricted the ability to post certain content, proved effective at discouraging spam behaviors, while reactive bans were able to discourage a wider variety of behaviors. This work considers the intersection of tools, authority, and types of behaviors, offering a new frame through which to consider the development of moderation strategies.

PDF | ACM DL

2015


From Overview

Student learning outcomes have long been established as an important component in the process of developing subject content, communicating expectations to students, and designing effective assessments. This project focused on mapping the relationships among outcomes across the undergraduate curriculum in the Department of Aeronautics and Astronautics at MIT. Through this project, we expanded upon existing sets of outcomes and created new sets where none previously existed to connect subjects in the undergraduate curriculum in an integrated framework.

PDF