
PROBLEM STATEMENT

Roughly 4 in 10 Americans have personally experienced online harassment, ranging from less severe behaviors such as offensive name-calling to more severe ones like stalking and sustained harassment.

SOLUTION

A Chrome browser extension that analyzes the word usage of Twitter profiles, detects potentially abusive behavior, and then warns users visually, so that this early warning can prevent unhealthy interactions from taking place.

HONORS 

Published as a reception demo in the ACM CSCW 2018 proceedings

 

MY ROLE

HCI/UX Engineer.

 

SKILLS

System Design, Full-Stack Development, Distilling Literature Research

 

TOOLS

HTML, CSS, JavaScript, Python

 

GITHUB

Feel free to contribute to our code here.

MOTIVATION

Online abuse, experienced by roughly 4 in 10 Americans, is a serious issue on social media platforms.

Online abuse is a commonplace issue embedded in contemporary social media platforms. Roughly 4 in 10 Americans have personally experienced online harassment, ranging from less severe behaviors such as offensive name-calling to more severe ones like stalking and sustained harassment. The targets of harassment vary in physical appearance, race, and sex, and harassment often engenders grave mental or emotional stress and relationship problems in its victims. While a majority of the public believes that social media platforms should take the lead in mitigating the problem, studies have found that none of these platforms clearly define the term 'harassment' in their service policies.

PROBLEM STATEMENT 

Hard to Identify and Contextualize Bad User Behavior on Twitter

Identifying abusive user behavior on Twitter requires exhaustive analysis, both because users have a higher chance of remaining anonymous on the platform and because the currently available self-reports of harassment are slow to process. In 2014, Twitter's CEO admitted that the platform had failed to manage offensive user behavior, and in 2017 Twitter expanded its policy against hate speech and sexual harassment. However, effectively catching abusers and intervening in these situations remains an ongoing challenge. Further, the traditional identity metrics Twitter provides (followers, tweet count, etc.) may be insufficient for users to assess another user's behavior.

LITERATURE REVIEW 

What are existing works on online harassment on Twitter? 

Online harassment on social media platforms has been considered a dark side of social computing. Many researchers have investigated why and how abusers proliferate and have examined the effectiveness of different mitigation methods: crowdsourcing [6], self-reports [2], blocklists [4, 5], and bot intervention [7]. While these approaches produced promising evaluation results, they have not explored natural language analysis paired with user-side visualization that makes an account's abusive characteristics apparent to users.

SOLUTION

Tweety Holmes, a Browser Extension for Twitter

Tweety Holmes is a Chrome browser extension that analyzes the word usage of Twitter profiles, detects potentially abusive behavior, and then warns users visually. It follows principles of algorithmic transparency by visually indicating which words or tweets flagged the profile as abusive, so users can better understand the context. It also alerts users when they are mentioned or messaged by a potentially abusive user, with the hope that this early warning can prevent unhealthy interactions from taking place.
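
As a minimal sketch of how such a content script could be structured (the word list, DOM selectors, and function names below are illustrative assumptions, not the actual Tweety Holmes code):

```javascript
// content-script.js — illustrative sketch of the extension's core loop.
// ABUSIVE_WORDS and the selectors are placeholder assumptions.
const ABUSIVE_WORDS = new Set(['insult', 'slur']); // stand-in dictionary

// Extract lowercase word tokens from a tweet's text.
function tokenize(text) {
  return text.toLowerCase().match(/[a-z']+/g) || [];
}

// Return the abusive words found in one tweet element.
function flagTweet(tweetEl) {
  return tokenize(tweetEl.textContent).filter((w) => ABUSIVE_WORDS.has(w));
}

// Scan every tweet on a profile page and warn if any were flagged.
function scanProfile() {
  const tweets = document.querySelectorAll('[data-testid="tweetText"]');
  const hits = [];
  tweets.forEach((t) => hits.push(...flagTweet(t)));
  if (hits.length > 0) {
    console.warn('Potentially abusive profile; flagged words:', hits);
  }
}

// Twitter renders tweets dynamically, so re-scan as the timeline loads.
new MutationObserver(scanProfile).observe(document.body, {
  childList: true,
  subtree: true,
});
```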

MVP UI DESIGN 

Replacing the account's bio with an abusive-profile module

To make the warning more obvious, the system replaces the abusive account's bio with the extension's profile module. The module states the account's abusive status and lists the words that contributed to identifying it as abusive.
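
A rough sketch of how the bio replacement could work from a content script (the selector, class name, and markup are assumptions for illustration):

```javascript
// Replace a flagged account's bio with a warning module.
// The selector and the module's markup are illustrative assumptions.
function insertWarningModule(flaggedWords) {
  const bio = document.querySelector('[data-testid="UserDescription"]');
  if (!bio) return;

  const module = document.createElement('div');
  module.className = 'th-warning-module';
  module.innerHTML = `
    <strong>This account may be abusive.</strong>
    <p>Words that contributed to this identification:</p>
    <ul>${flaggedWords.map((w) => `<li>${w}</li>`).join('')}</ul>
  `;
  bio.replaceWith(module); // swap the bio for the warning
}
```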

MVP UI DESIGN 

Highlighting abusive words in tweets 

The system supports its claim with evidence by highlighting the tweets and words that triggered the abusiveness identification. Since the degree of abusiveness and the nuance of a word can vary, it is important to provide the context in which the words were used.
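
A simplified sketch of in-context highlighting (it ignores edge cases such as markup inside the tweet text, and the CSS class name is an assumption):

```javascript
// Wrap each flagged word in a tweet with a highlight element so the
// evidence stays visible in its original context.
function highlightWords(tweetEl, flaggedWords) {
  let html = tweetEl.innerHTML;
  for (const word of flaggedWords) {
    // Match whole words only, case-insensitively.
    const re = new RegExp(`\\b(${word})\\b`, 'gi');
    html = html.replace(re, '<mark class="th-abusive-word">$1</mark>');
  }
  tweetEl.innerHTML = html;
}
```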

MVP UI DESIGN 

Signaling interaction with abusive users in notifications 

In addition to changing the abusive user's profile interface, the system also inspects the user's Notifications page to preemptively flag potentially offensive users. When a user is mentioned or messaged, they can immediately recognize from the system's mark whether the message comes from an abusive user.
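
One way a content script might badge such notifications (the selectors are assumptions, and the flagged-handle set would in practice be persisted by the extension, e.g. via chrome.storage):

```javascript
// Mark notifications that come from accounts previously flagged as abusive.
const flaggedHandles = new Set(['@exampleAbuser']); // placeholder store

function markNotifications() {
  document.querySelectorAll('[data-testid="notification"]').forEach((note) => {
    const link = note.querySelector('a[href^="/"]');
    const handle = link ? '@' + link.getAttribute('href').slice(1) : '';
    // Badge each flagged notification once.
    if (flaggedHandles.has(handle) && !note.querySelector('.th-badge')) {
      const badge = document.createElement('span');
      badge.className = 'th-badge';
      badge.textContent = 'potentially abusive user';
      note.prepend(badge);
    }
  });
}
```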

MVP SYSTEM DESIGN 

Detection Algorithm

We used a combined dataset of two readily available abusive-word dictionaries to determine the abusiveness of a word in a tweet. First, Jhaver et al.'s abusive word dictionary, built with SAGE analysis, distinguishes the word usage in tweets from blocked users versus non-blocked users [5]. Second, Luis von Ahn's offensive/profane word list gave us words that can be categorized as abusive or cursing [1]. Combining both dictionaries yielded a comprehensive, broad range of abusive words. While there is still room to improve how users perceive the abusiveness of flagged words and to evaluate the correlation between a user's actual abusiveness and their word usage, the combined dictionary produced promising detection results in our internal evaluation.
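
The detection step reduces to a set lookup over the merged dictionaries. A minimal sketch (file names and tokenization are illustrative assumptions, not the published system's exact code):

```javascript
// Build one combined dictionary from the two word lists and score a tweet.
const fs = require('fs');

// Load a newline-separated word list into an array of lowercase words.
function loadWordList(path) {
  return fs.readFileSync(path, 'utf8')
    .split('\n')
    .map((w) => w.trim().toLowerCase())
    .filter(Boolean);
}

// Merge Jhaver et al.'s SAGE-based list [5] with von Ahn's profanity list [1].
// The file names here are placeholders.
const combined = new Set([
  ...loadWordList('sage_abusive_words.txt'),
  ...loadWordList('von_ahn_bad_words.txt'),
]);

// Return the dictionary hits in a tweet; any hit flags the word usage.
function abusiveHits(tweetText) {
  const tokens = tweetText.toLowerCase().match(/[a-z']+/g) || [];
  return tokens.filter((t) => combined.has(t));
}

console.log(abusiveHits('you are a wonderful person')); // []
```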

CURRENT WORK

Building a stable and usable version 

I am currently working toward building a more stable version of the designed system. This includes building a stronger database and using machine learning models for word analysis to more accurately predict whether a word in a tweet is abusive.

COMING SOON

Pilot User Study 

To evaluate our system, we plan to conduct user feedback sessions in the near future.


ROLE

My Contribution

In a team of five, we made the key design decisions and created an MVP of the system from January to March 2018. I am currently working with two HCI researchers to build a highly stable and usable version of the extension.

Presenting our project at CSCW 2018