Letterboxd Redesign

Improving navigation for a smoother, smarter experience

Overview

A redesign of the Letterboxd website to improve usability and streamline key tasks.

Timeline

April - June 2024 (10 Weeks)

Roles

Information Architecture, Usability Testing, & Design

Tools

Figma, Optimal Workshop, Canva

OVERVIEW

Making Movie Discovery Easier for Everyone

Letterboxd is a beloved platform for film lovers to track what they’ve watched, discover new favorites, and follow friends’ reviews and activity. But despite its loyal fanbase, its current navigation system is inconsistent, overloaded, and confusing, especially for newer users.

My redesign focused on reorganizing the site’s information architecture to make key features easier to find and improve the overall usability and clarity of the platform.

THE CHALLENGE

Letterboxd’s navigation is disorganized, inconsistent, & confusing

Letterboxd’s current navigation system makes simple tasks more complicated than they should be. Here’s what I found:

  • Important items are hidden in drop-down menus and the footer

  • Unused or unclear labels occupy top-level navigation

  • Inconsistent terms and duplicate labels (like “Journal” vs “Diary”) cause confusion

  • Drop-down menus are cluttered, making it hard to scan and navigate

THE SOLUTION

A cleaner, clearer navigation system

Through iterative research and testing, I reorganized the site’s navigation based on user behavior and task frequency. My redesign introduced new categories, relocated less-used items, and clarified confusing labels.

Major improvements include:
  • New navigation categories: Browse, My Films, Help, and Contact

  • Relocated utility items (like Profile and Settings) into a separate nav

  • Clearer labeling: “Log a Film” replaces vague icons and text

  • Grouped user-related content under “My Films” for personalized control

  • Consolidated duplicated items and removed redundant ones

CONTENT INVENTORY

What’s here, and where is it?

I began with a content inventory to evaluate existing labels, drop-downs, and footers. Key takeaways:

  • Most important content was buried under the ambiguous “username” drop-down

  • Multiple labels (like Journal/News and Diary) were used inconsistently

  • Essential pages like Help and Contact were buried in the footer

PROPOSED SITEMAP

Initial Sitemap Changes

Based on the content inventory, I revised the initial sitemap to streamline navigation and support clearer task flows. These updates aim to reduce cognitive load, prioritize user support, and enhance discoverability. The proposed structure reflects a more intuitive hierarchy that aligns with users' mental models and real-world needs. Key changes included:

  • Promoting “Help” and “Contact” to top-level nav items

  • Consolidating user-specific items under one umbrella

  • Creating clearer pathways for task completion

CARD SORTS

Reorganizing by Mental Models

To ensure the app's information architecture aligned with users' mental models, I conducted multiple rounds of card sorting. These sessions helped identify intuitive groupings, surface navigation pain points, and refine category labels. Through iterative testing, I achieved a stronger consensus on content organization and clarified the structure for both top-level and utility navigation.

Pilot Test
  • 16 cards / 4 categories / 7 participants

  • Only 3 cards had over 80% placement agreement

  • Feedback: categories too vague; lacked structure for lower-level nav

Round 1
  • 15 cards / 4 categories / 10 participants

  • 6 cards had over 80% agreement; clearer patterns emerged

  • Feedback: some items better suited for utility nav (e.g., Profile, Settings)

Round 2
  • 12 cards / 4 categories / 15 participants

  • 7 cards reached 80%+ placement agreement

  • Solved confusion by grouping “Diary” and “Tags” under “Watch History”

TREE TESTS

Can users actually find what they need?

To further evaluate the effectiveness of the proposed site structure, I conducted multiple rounds of tree testing focused on key tasks. These tests revealed how well users could navigate the hierarchy without visual cues, surfacing terminology issues and misclassified items. Iterative refinements significantly improved success rates, validating key adjustments to the sitemap and category labels.

Pilot Test
  • 5 tasks / 7 participants

  • Success varied widely: 14% - 86%

  • Problems: included utility nav items, tasks not aligned with site structure

Round 1
  • 5 tasks (revised) / 10 participants

  • Success: 60% - 100%

  • Fixes: clarified unclear terms like “Diary”, reorganized ambiguous categories

Round 2
  • 5 tasks / 15 participants

  • Success: 73% - 100%

  • Biggest win: Moving “Watchlist” and “Contact” into clearer categories led to major improvements

FIRST-CLICK TEST

Is the interface doing its job?

To assess the intuitiveness of key interactions, I ran a first-click test focused on two core tasks: logging a film and adding a film to the watchlist. This method measured whether users could identify the correct action on their first try, which is an important indicator of interface clarity. The results highlighted opportunities to improve labeling and button placement to reduce ambiguity and streamline user flows.

Task 1: Log a Film

  • Result: 10/15 clicked the correct button

  • Issue: “Log” was vague; some clicked the search bar or friend activity

  • Recommendations: Relabel the “log” button so its function is clearer and more consistent

Task 2: Add to Watchlist

  • Result: 13/15 success

  • Issues: Similar-sounding buttons (“watch” vs “watchlist”) caused confusion

  • Recommendations: Make labels more distinct, ensure clearer button placement

FINAL SITEMAP

Simpler, smarter, streamlined

Based on the results from the card sorts, tree tests, and first-click test, I restructured the sitemap to align with user tasks and mental models. Changes like renaming “Journal” to “News” and moving utility nav items helped declutter the interface and improve readability.

FINAL SCREENS

More intuitive navigation for core user flows

Browsing for Films
  • Important information and actions relocated to the top

  • More concise information and layout = less scrolling required

  • Navigation reflects updated sitemap and hierarchy

Home Page
  • Clearer layout

  • Concise activity feed

  • Color-coded buttons and links for better CTA

Logging a Film
  • More accessible & identifiable buttons

  • Clearer labeling

  • More task context

REFLECTION & FUTURE WORK

Key Takeaways & Improvements

What I Learned
  • This was my first time conducting multiple rounds of user testing in such a short time frame, and while it was overwhelming at times, it was incredibly valuable

  • Iterative testing helped me see how even small changes, like changing a label or reorganizing a category, can have a big impact

  • I realized the importance of trying ideas even if they don’t work out; failed attempts still provided helpful insights

  • Seeing the evolution from the pilot tests to the final versions reinforced how essential testing is for refining information architecture

What I'd Build On

Looking ahead, I plan to structure future testing phases more strategically to improve clarity and data quality. This includes separating card sorting and tree testing into distinct rounds to isolate insights and reduce participant fatigue. I'll also invest more time into organizing pilot tests to ensure they're focused and effective. For greater validation, I'd run additional rounds of first-click testing, starting with a pilot test, and define consistent task scenarios early to enable meaningful comparisons across studies. Lastly, establishing a recruitment strategy at the outset will help secure enough quality participants, especially when working solo.

Kylie Wielkiewicz

2025
