
TikTok FairLens
Client: Carnegie Mellon University, CARE (Collective AI Research and Evaluation) Lab
Wouldn't it be cool if your feed became a mirror and a window into others’ lives?
ROLE
UX Researcher
Product Designer
TOOLS
Figma
Notion
Miro
Google Workspace
DURATION
5 months
IMPACT
Designed a fairer feed to empower marginalized voices and reduce algorithmic bias
CONTEXT
Generative AI filters on social media platforms like TikTok are increasingly popular. However, these filters often produce biased results, leaving users disappointed and confused when the generated content reflects unacknowledged societal biases.
RESEARCH
To inform our design decisions and ground our solution in user needs, we conducted qualitative research using think-aloud protocols and affinity clustering. Our goal was to explore user perceptions of generative AI on social media and evaluate potential interventions to increase awareness of AI bias.
How Might We:
increase users' awareness of generative AI bias on TikTok through features that are clear, engaging, and trustworthy?
Key Insights
From our research, we identified several key insights:
Users want the ability to engage on their own terms—features should be optional, non-intrusive, and seamlessly integrated into existing interaction flows.
Most users are not fully aware of generative AI bias, which presents both a challenge and an opportunity: awareness must be built gradually and thoughtfully.
Social proof and expert perspectives can increase credibility and encourage broader participation in reporting and discussing bias.
These findings shaped the direction of our final design, which centers on low-friction interactions, community-powered insights, and trust-building mechanisms to raise awareness of generative AI bias in a way that feels native to the TikTok experience.
DESIGN DECISIONS
Testing Our Initial Designs & Narrowing Our Focus
Feedback on our early prototypes revealed several areas for refinement:
"The [bias reporting] button design didn’t catch my eye, so I think it could affect engagement."
"It would be nice to see a reason for people saying it is or is not biased."
“Users might misuse the reporting feature for content they dislike.”
“Listing the amount of people that voted would lead to more belief in the validity of the results.”
REFLECTIONS
AI can be used for good
Research is a vital part of the design process
Personalization is important, but community is powerful