
TikTok FairLens
Client: Carnegie Mellon University, CARE (Collective AI Research and Evaluation) Lab
Wouldn't it be cool if your feed became a mirror and a window into others’ lives?
ROLE
UX Researcher
Product Designer
TOOLS
Figma
Notion
Miro
Google G-Suite
DURATION
5 months
IMPACT
Designed a community-driven bias reporting feature that raises awareness of generative AI bias and amplifies marginalized voices
CONTEXT
With the rise of generative AI tools across social media platforms, TikTok users are increasingly engaging with filters that subtly influence how they perceive themselves and others. While these filters are often marketed as fun or artistic, many users report feeling disappointed or confused by the results, especially when the generated output reflects unacknowledged biases.
To address this, we designed a bias reporting feature integrated into TikTok’s existing UI. By surfacing community-driven insights and encouraging engagement around AI-generated content, the solution aims to empower users to better understand how algorithmic design can reinforce bias—and to advocate for more transparent, inclusive AI experiences.
RESEARCH
To inform our design decisions and ground our solution in user needs, we conducted qualitative research using think-aloud protocols and affinity clustering. Our goal was to explore user perceptions of generative AI on social media and evaluate potential interventions to increase awareness of AI bias.
How Might We:
increase users’ awareness of generative AI bias on TikTok while ensuring the supporting features offer clarity, drive engagement, and remain trustworthy?
Methods
We began by conducting user interviews, encouraging participants to verbalize their thoughts and reactions in real time using a think-aloud protocol. This approach allowed us to understand how users process information about generative AI, what concerns they have, and how they might interact with a bias reporting feature.
After the interviews, we synthesized the data using affinity clustering, grouping user quotes and behavioral insights into themes. This helped us identify shared mental models, pain points, and expectations around AI-generated content and social media interaction patterns.
Evidence
Background Research – Understanding the Problem
Participants expressed growing concern about the authenticity of AI-generated content and the potential erosion of trust:
“I feel that AI-generated images in advertising creates a consumer trust problem.”
“It's crucial for me to know if the image is AI-generated.”
One participant was “worried about the increasing presence of AI-generated content.”
DESIGN DECISIONS
Testing Our Initial Designs & Narrowing Our Focus
Feedback on our early prototypes revealed several areas for improvement and refinement:
"The [bias reporting] button design didn’t catch my eye, so I think it could affect engagement."
"It would be nice to see a reason for people saying it is or is not biased."
“Users might misuse the reporting feature for content they dislike.”
“Listing the amount of people that voted would lead to more belief in the validity of the results.”
Insights & Implications
From our research, we identified several key insights:
Users want the ability to engage on their own terms—features should be optional, non-intrusive, and seamlessly integrated into existing interaction flows.
Most users are not fully aware of generative AI bias, which presents both a challenge and an opportunity: awareness must be built gradually and thoughtfully.
Social proof and expert perspectives can increase credibility and encourage broader participation in reporting and discussing bias.
These findings shaped the direction of our final design, which centers on low-friction interactions, community-powered insights, and trust-building mechanisms to raise awareness of generative AI bias in a way that feels native to the TikTok experience.
REFLECTIONS
AI can be used for good
Research is a vital part of the design process
Personalization is important, but community is powerful