Joseph Seering

Below are recent publications related to my current work.

2019


Abstract

Chatbots have grown as a space for research and development in recent years due both to the realization of their commercial potential and to advancements in language processing that have facilitated more natural conversations. However, nearly all chatbots to date have been designed for dyadic, one-on-one communication with users. In this paper we present a comprehensive review of research on chatbots supplemented by a review of commercial and independent chatbots. We argue that chatbots’ social roles and conversational capabilities beyond dyadic interactions have been underexplored, and that expansion into this design space could support richer social interactions in online communities and help address the longstanding challenges of maintaining, moderating, and growing these communities. In order to identify opportunities beyond dyadic interactions, we used research-through-design methods to generate more than 400 concepts for new social chatbots, and we present seven categories that emerged from analysis of these ideas.


Abstract

Ensuring high-quality, civil social interactions remains a vexing challenge in many online spaces. In the present work, we introduce a novel approach to address this problem: using psychologically “embedded” CAPTCHAs containing stimuli intended to prime positive emotions and mindsets. An exploratory randomized experiment (N = 454 Mechanical Turk workers) tested the impact of eight new CAPTCHA designs implemented on a simulated, politically charged comment thread. Results revealed that the two interventions that were the most successful at activating positive affect also significantly increased the positivity of tone and analytical complexity of argumentation in participants’ responses. A focused follow-up experiment (N = 120 Mechanical Turk workers) revealed that exposure to CAPTCHAs featuring image sets previously validated to evoke low-arousal positive emotions significantly increased the positivity of sentiment and the levels of complexity and social connectedness in participants’ posts. We offer several explanations for these results and discuss the practical and ethical implications of designing interfaces to influence discourse in online forums.
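
The intervention mechanism is simple to picture in code. Below is a minimal, hypothetical sketch of how an affect-primed ("embedded") CAPTCHA might be assembled before a comment form is shown; the image pools, file names, and selection task are illustrative assumptions, not the validated stimuli or implementation from the study.

```python
# Hypothetical sketch: pair a routine CAPTCHA verification task with
# affect-priming imagery. All stimulus names and conditions are made up.
import random
from dataclasses import dataclass

# Stand-ins for image sets previously validated to evoke particular
# emotional states (the study's actual stimuli are not reproduced here).
STIMULUS_POOLS = {
    "low_arousal_positive": ["meadow.jpg", "sleeping_cat.jpg", "calm_lake.jpg"],
    "neutral_control": ["chair.jpg", "paperclip.jpg", "doorknob.jpg"],
}

@dataclass
class CaptchaChallenge:
    images: list   # stimuli shown in the CAPTCHA grid
    prompt: str    # the selection task presented to the user
    condition: str # experimental condition, logged for analysis

def build_embedded_captcha(condition: str, n_images: int = 3) -> CaptchaChallenge:
    """Keep the verification task mundane; the priming comes from the imagery."""
    pool = STIMULUS_POOLS[condition]
    images = random.sample(pool, k=min(n_images, len(pool)))
    return CaptchaChallenge(
        images=images,
        prompt="Select all images that contain an animal.",
        condition=condition,
    )

challenge = build_embedded_captcha("low_arousal_positive")
print(challenge.prompt, challenge.images)
```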


Abstract

People with health concerns go to online health support groups to obtain help and advice. To do so, they frequently disclose personal details, often in public. Although research in non-health settings suggests that people self-disclose less in public than in private, this pattern may not apply to health support groups, where people want to get relevant help. Our work examines how the use of private and public channels influences members’ self-disclosure in an online cancer support group, and how channels moderate the influence of self-disclosure on reciprocity and receiving support. By automatically measuring people’s self-disclosure at scale, we found that members of cancer support groups revealed more negative self-disclosure in public channels than in private ones. Although one’s self-disclosure led others to self-disclose and to provide support, these effects were generally stronger in the private channel. These channel effects probably occur because the public channels are the primary venue for support exchange, while the private channels are mainly used for follow-up conversations. We discuss theoretical and practical implications of our work.
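
"Automatically measuring self-disclosure at scale" implies a concrete pipeline: score each message, then compare scores across channels. The study used models trained on human-annotated posts; the lexicon-based stand-in below, with made-up cue words, is only a toy illustration of that pipeline's shape.

```python
# Toy sketch of scoring negative self-disclosure per message, then
# averaging by channel. The cue-word lists are illustrative, not the
# study's trained model.
from statistics import mean

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIVE_CUES = {"afraid", "scared", "worried", "pain", "alone", "cry"}

def negative_self_disclosure_score(message: str) -> float:
    """Crude proxy: co-occurrence of first-person and negative-emotion words."""
    tokens = message.lower().split()
    if not tokens:
        return 0.0
    first_person = sum(t in FIRST_PERSON for t in tokens)
    negative = sum(t in NEGATIVE_CUES for t in tokens)
    return (first_person * negative) / len(tokens)

# Hypothetical corpus tagged by channel (public board post vs. private message).
messages = [
    ("public",  "I am so scared about my scan results"),
    ("public",  "Sending strength to everyone here today"),
    ("private", "Thanks for your note, I feel less alone now"),
]

by_channel = {}
for channel, text in messages:
    by_channel.setdefault(channel, []).append(negative_self_disclosure_score(text))

for channel, scores in by_channel.items():
    print(channel, round(mean(scores), 3))
```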


Abstract

Online communities provide a forum for rich social interaction and identity development for billions of internet users worldwide. In order to manage these communities, platform owners have increasingly turned to commercial content moderation, which includes both the use of moderation algorithms and the employment of professional moderators, rather than user-driven moderation, to detect and respond to anti-normative behaviors such as harassment and spread of offensive content. We present findings from semi-structured interviews with 56 volunteer moderators of online communities across three platforms (Twitch, Reddit, and Facebook), from which we derived a generalized model categorizing the ways moderators engage with their communities and explaining how these communities develop as a result. This model contains three processes: being and becoming a moderator; moderation tasks, actions, and responses; and rules and community development. In this work, we describe how moderators contribute to the development of meaningful communities, both with and without algorithmic support.


2018


Abstract

Bots, or programs designed to engage in social spaces and perform automated tasks, are typically understood as automated tools or as social "chatbots." In this paper, we consider bots’ place alongside users within diverse communities in the emerging social ecosystem of audience participation platforms, guided by concepts from structural role theory. We perform a large-scale analysis of bot activity levels on Twitch, finding that they communicate at a much greater rate than other types of users. We build on prior literature on bot functionalities to identify the roles bots play on Twitch, how these roles vary across different types of Twitch communities, and how users engage with them and vice versa. We conclude with a discussion of where opportunities lie to re-conceptualize and re-design bots as social actors who help communities grow and evolve.
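
As a rough illustration of the rate comparison described above, the sketch below computes messages per active minute by account type from a hypothetical chat log; the record schema and labels are assumptions for illustration, not Twitch's actual data format or the paper's exact measure.

```python
# Sketch: compare per-user message rates for bots vs. other users,
# given timestamped chat messages labeled by account type.
from collections import defaultdict

# (channel, username, account_type, unix_timestamp) -- hypothetical schema
chat_log = [
    ("chan1", "nightbot", "bot",  1_000_000),
    ("chan1", "nightbot", "bot",  1_000_020),
    ("chan1", "viewer42", "user", 1_000_050),
    ("chan1", "nightbot", "bot",  1_000_100),
    ("chan1", "viewer42", "user", 1_000_400),
]

timestamps = defaultdict(list)
for _, username, acct_type, ts in chat_log:
    timestamps[(username, acct_type)].append(ts)

rates = defaultdict(list)
for (username, acct_type), ts_list in timestamps.items():
    # Active span in minutes, floored at one to avoid division by zero.
    active_minutes = max((max(ts_list) - min(ts_list)) / 60, 1)
    rates[acct_type].append(len(ts_list) / active_minutes)

for acct_type, per_user in rates.items():
    print(acct_type, "mean msgs/active-minute:",
          round(sum(per_user) / len(per_user), 2))
```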


Abstract

Research in computer-supported cooperative work has historically focused on behaviors of individuals at scale, using frames of interpersonal interaction such as Goffman’s theories of self-presentation. These frames prioritize research detailing the characteristics, personal identities, and behaviors of large numbers of interacting individuals, while the social identity concepts that lead to intra- and inter-group dynamics have received far less attention. We argue that the emergent properties of self-categorization and social identity, which are particularly fluid and complex in online spaces, provide a complementary perspective with which to re-examine traditional topics in social computing. We discuss the applicability of the Social Identity Perspective to both established and new research domains in CSCW, proposing alternative perspectives on self-presentation, social support, collaboration, misbehavior, and leadership. We propose a set of methodological considerations derived from this body of theories and accompanying empirical work. We close by considering how broad concepts and lessons from social identity provide a valuable lens for inspiring future work in CSCW.


Abstract

Livestreamed APGs (audience participation games) allow stream viewers to participate meaningfully in a streamer’s gameplay. However, streaming interfaces are not designed to meet the needs of audience participants. In order to explore the game design space of APGs, we provided three game development teams with an audience participation interface development toolkit. Teams iteratively developed and tested APGs over the course of ten months, and then reflected on common design challenges across the three games. Six challenges were identified: latency, screen sharing, attention management, player agency, audience-streamer relationships, and shifting schedules. The impact of these challenges on players was then explored through external playtests. We conclude with implications for the future of APG design.
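
Latency, the first of these challenges, is concrete enough to sketch: because the broadcast an audience reacts to trails the game by several seconds, an APG cannot act on inputs the moment they arrive. One plausible workaround (an assumption here, not the toolkit's documented design) is to hold each decision open for a voting window offset by the estimated stream delay.

```python
# Sketch: collect audience votes over a window shifted by the estimated
# broadcast delay, then apply the winning choice. Delay and window sizes
# are illustrative assumptions.
import time
from collections import Counter

STREAM_DELAY_S = 8   # assumed broadcast latency
VOTE_WINDOW_S = 10   # how long a single decision stays open

def tally_votes(votes: list[tuple[float, str]], decision_opened_at: float) -> str | None:
    """Count only votes cast while the decision was actually visible to viewers."""
    window_start = decision_opened_at + STREAM_DELAY_S
    window_end = window_start + VOTE_WINDOW_S
    in_window = [choice for ts, choice in votes if window_start <= ts <= window_end]
    if not in_window:
        return None
    return Counter(in_window).most_common(1)[0][0]

opened = time.time() - 20  # pretend the prompt appeared 20 seconds ago
votes = [(opened + 9, "left"), (opened + 11, "right"), (opened + 12, "right")]
print(tally_votes(votes, opened))  # -> "right"
```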


2017


Abstract

In this paper we explore audience participation games, a type of game that draws spectators into a role where they can impact gameplay in meaningful ways. To better understand this design space, we developed several versions of two prototype games as design probes. We livestreamed them to an online audience in order to develop a framework for audience motivations and participation styles, to explore ways in which mechanics can affect audience members’ sense of agency, and to identify promising design spaces. Our results show the breadth of opportunities and challenges that designers face in creating engaging audience participation games.


Abstract

Online communities have the potential to be supportive, cruel, or anywhere in between. The development of positive norms for interaction can help users build bonds, grow, and learn. Using millions of messages sent in Twitch chatrooms, we explore the effectiveness of methods for encouraging and discouraging specific behaviors, including taking advantage of imitation effects through setting positive examples and using moderation tools to discourage antisocial behaviors. Consistent with aspects of imitation theory and deterrence theory, users imitated examples of behavior that they saw, and did so more strongly for behaviors from high-status users. Proactive moderation tools, such as chat modes that restricted the ability to post certain content, proved effective at discouraging spam behaviors, while reactive bans were able to discourage a wider variety of behaviors. This work considers the intersection of tools, authority, and types of behaviors, offering a new frame through which to consider the development of moderation strategies.
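
As a rough sketch of how imitation might be operationalized in chat data (an illustration, not the paper's actual measure), the snippet below flags messages that reuse tokens from a recent window of prior messages and notes whether the imitated example came from a high-status user.

```python
# Sketch: detect token reuse from a sliding window of recent messages,
# distinguishing imitation of high-status sources. Window size, status
# labels, and the sample chat are illustrative assumptions.
from collections import deque

WINDOW = 5  # how many prior messages count as visible "examples"

def imitation_events(messages: list[tuple[str, str]]) -> list[tuple[bool, bool]]:
    """For each message: (imitated_anything, imitated_high_status_source)."""
    history = deque(maxlen=WINDOW)  # (status, token_set) of recent messages
    events = []
    for status, text in messages:
        tokens = set(text.lower().split())
        imitated = any(tokens & prev_tokens for _, prev_tokens in history)
        imitated_high = any(
            tokens & prev_tokens
            for prev_status, prev_tokens in history
            if prev_status == "high"
        )
        events.append((imitated, imitated_high))
        history.append((status, tokens))
    return events

chat = [
    ("high", "PogChamp what a play"),
    ("low",  "PogChamp insane"),
    ("low",  "lol nice"),
]
print(imitation_events(chat))  # second message imitates a high-status example
```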