

OVERVIEW
Letterboxd is a beloved platform for film lovers to track what they’ve watched, discover new favorites, and follow friends’ reviews and activity. But despite its loyal fanbase, its current navigation system is inconsistent, overloaded, and confusing, especially for newer users.
My redesign focused on reorganizing the site’s information architecture to make key features easier to find and improve the overall usability and clarity of the platform.
THE CHALLENGE
Letterboxd’s current navigation system makes simple tasks more complicated than they should be. Here’s what I found:
Important items are hidden in drop-down menus and the footer
Unused or unclear labels occupy top-level navigation
Inconsistent terms and duplicate labels (like “Journal” vs “Diary”) cause confusion
Drop-down menus are cluttered, making it hard to scan and navigate
THE SOLUTION
Through iterative research and testing, I reorganized the site’s navigation based on user behavior and task frequency. My redesign introduced new categories, relocated less-used items, and clarified confusing labels.


New navigation categories: Browse, My Films, Help, and Contact
Relocated utility items (like Profile and Settings) into a separate nav
Clearer labeling: “Log a Film” replaces vague icons and text
Grouped user-related content under “My Films” for personalized control
Consolidated duplicated items and removed redundant ones
CONTENT INVENTORY
I began with a content inventory to evaluate existing labels, drop-downs, and footers. Key takeaways:
Most important content was buried under the ambiguous “username” drop-down
Multiple labels (like Journal/News and Diary) were used inconsistently
Essential pages like Help and Contact were buried in the footer
PROPOSED SITEMAP
Based on the content inventory, I revised the initial sitemap to streamline navigation and support clearer task flows. These updates aim to reduce cognitive load, prioritize user support, and enhance discoverability. The proposed structure reflects a more intuitive hierarchy that aligns with mental models and real-world needs.

Promoting “Help” and “Contact” to top-level nav items
Consolidating user-specific items under one umbrella
Creating clearer pathways for task completion
CARD SORTS
To ensure the app's information architecture aligned with users' mental models, I conducted multiple rounds of card sorting. These sessions helped identify intuitive groupings, surface navigation pain points, and refine category labels. Through iterative testing, I achieved a stronger consensus on content organization and clarified the structure for both top-level and utility navigation.
16 cards / 4 categories / 7 participants
Only 3 cards had over 80% placement agreement
Feedback: categories too vague; lacked structure for lower-level nav
15 cards / 4 categories / 10 participants
6 cards had over 80% agreement; clearer patterns emerged
Feedback: some items better suited for utility nav (e.g., Profile, Settings)
12 cards / 4 categories / 15 participants
7 cards reached 80%+ placement agreement
Solved confusion by grouping “Diary” and “Tags” under “Watch History”
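The placement-agreement figures above can be thought of as, for each card, the share of participants who sorted it into its single most popular category. A minimal sketch of that calculation, using made-up illustrative data rather than the actual study results:

```python
from collections import Counter

def placement_agreement(sorts):
    """For each card, return the fraction of participants who placed it
    in its most common category (higher = stronger consensus)."""
    agreement = {}
    for card in sorts[0]:
        counts = Counter(participant[card] for participant in sorts)
        agreement[card] = counts.most_common(1)[0][1] / len(sorts)
    return agreement

# Illustrative data only: each dict is one participant's sort (card -> category)
sorts = [
    {"Diary": "My Films", "Tags": "My Films", "News": "Browse"},
    {"Diary": "My Films", "Tags": "Browse",   "News": "Browse"},
    {"Diary": "My Films", "Tags": "My Films", "News": "Browse"},
]
result = placement_agreement(sorts)
# "Diary" reaches 100% agreement; "Tags" only ~67%, below the 80% threshold
```

A card clearing the 80% threshold (like “Diary” here) signals a stable grouping, while lower-agreement cards flag labels or categories worth revising in the next round.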
TREE TESTS
To further evaluate the effectiveness of the proposed site structure, I conducted multiple rounds of tree testing focused on key tasks. These tests revealed how well users could navigate the hierarchy without visual cues, surfacing terminology issues and misclassified items. Iterative refinements significantly improved success rates, validating key adjustments to the sitemap and category labels.
5 tasks / 7 participants
Success rates varied widely: 14%–86%
Problems: tasks included utility nav items and weren’t aligned with the site structure
5 tasks (revised) / 10 participants
Success: 60%–100%
Fixes: clarified unclear terms like “Diary”, reorganized ambiguous categories
5 tasks / 15 participants
Success: 73%–100%
Biggest win: Moving “Watchlist” and “Contact” into clearer categories led to major improvements
FINAL SITEMAP
Based on the results from the card sorts, tree tests, and first-click test, I restructured the sitemap to align with user tasks and mental models. Changes like renaming “Journal” to “News” and moving utility nav items helped declutter the interface and improve readability.

FINAL SCREENS
REFLECTION & FUTURE WORK
This was my first time conducting multiple rounds of user testing in such a short time frame, and while it was overwhelming at times, it was incredibly valuable
Iterative testing helped me see how even small changes, like changing a label or reorganizing a category, can have a big impact
I realized the importance of trying ideas even if they don’t work out; failed attempts still provided helpful insights
Seeing the evolution from the pilot tests to the final versions reinforced how essential testing is for refining information architecture
Looking ahead, I plan to structure future testing phases more strategically to improve clarity and data quality. This includes separating card sorting and tree testing into distinct rounds to isolate insights and reduce participant fatigue. I'll also invest more time into organizing pilot tests to ensure they're focused and effective. For greater validation, I'd run additional rounds of first-click testing, starting with a pilot test, and define consistent task scenarios early to enable meaningful comparisons across studies. Lastly, establishing a recruitment strategy at the outset will help secure enough quality participants, especially when working solo.