Jordan Gozinsky
Spot the Bot
By taking on monthly challenges, testing their detection skills, and learning from each interaction, Spot the Bot users sharpen their ability to identify AI-generated images in a fun, informative, and engaging environment.

Overview
Spot the Bot is a mobile app that helps users sharpen their ability to identify AI-generated images through interactive challenges, educational content, and gamified progression. Whether someone is curious about visual media, concerned about misinformation, or simply wants to test their instincts, the app offers an engaging way to develop visual literacy in the age of generative AI.
Unlike passive learning tools, Spot the Bot emphasizes active participation. Users guess whether images are real or AI-generated, receive instant feedback with detailed explanations, and can dive deeper through curated videos and articles. Progress is tracked through challenges and badges, making learning both informative and rewarding.
Goal
The primary goal of Spot the Bot is to help users improve their ability to recognize AI-generated images through an engaging, educational experience. The app guides users through image-guessing challenges where they receive instant feedback and detailed explanations, helping them better understand what makes an image look real or artificial. It also includes a learning section with videos and articles that build foundational skills in spotting visual clues. Overall, the app aims to make learning both accessible and rewarding by combining interactive gameplay with meaningful educational content.
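To make the core loop concrete, the sketch below models the guess-and-feedback mechanic described above. It is an illustrative assumption only, not the app's actual implementation; the type and function names (ChallengeImage, Feedback, evaluate) are hypothetical placeholders.

```swift
import Foundation

// Illustrative sketch of the guess -> instant feedback loop.
// All names are hypothetical; the real app's data model is not specified here.
enum ImageOrigin {
    case real
    case aiGenerated
}

struct ChallengeImage {
    let id: UUID
    let origin: ImageOrigin
    let explanation: String   // shown after the guess, e.g. a tip about visual clues
}

struct Feedback {
    let wasCorrect: Bool
    let explanation: String
}

/// Compares the user's guess to the image's true origin and returns
/// instant feedback along with a detailed explanation.
func evaluate(guess: ImageOrigin, for image: ChallengeImage) -> Feedback {
    Feedback(wasCorrect: guess == image.origin,
             explanation: image.explanation)
}

// Example: a user guesses "real" on an AI-generated image.
let sample = ChallengeImage(
    id: UUID(),
    origin: .aiGenerated,
    explanation: "Inconsistent shadows and distorted text are common AI artifacts."
)
let result = evaluate(guess: .real, for: sample)
print(result.wasCorrect ? "Correct!" : "Not quite. \(result.explanation)")
```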
Role
- UX Research
- UX Design
- Usability Testing
Tools
- Figma
Team
- 1 UX designer
Timeline
- Overall: 2 weeks
- Research: 4 days
- Design & Testing: 1.5 weeks
Research
- Competitor Analysis: Conducted a competitor analysis to identify strengths and gaps in similar AI detection tools. Many platforms focused heavily on technical detection without offering user education or engagement, and showed low retention due to limited interactivity, no feedback on incorrect guesses, and minimal gamification. These insights guided the decision to prioritize a user-centered, educational, and rewarding experience in the design.
- User Interviews: Conducted user interviews to understand behaviors, needs, and challenges related to identifying AI-generated images. Participants expressed uncertainty when making guesses and a desire for immediate feedback and tips, and many found existing tools too technical or uninspiring. These insights shaped core features of the app, including the educational feedback system, gamification elements, and an intuitive guessing interface.
- Accessibility: Researched how to create an inclusive design that meets the needs of users with diverse abilities, including those with disabilities. This involved identifying common barriers to access in similar digital products, gathering insights through user interviews and usability testing, and reviewing accessibility guidelines to inform design decisions. These findings shaped solutions that improve usability and ensure the product is welcoming and functional for all users.
Sketches
The design process began with hand-drawn sketches, which guided the subsequent stages of development. These initial visuals were created for the app’s expected high-traffic pages and provided a clear foundation for later design decisions. The sketches were heavily informed by insights from the competitor analysis and user interviews conducted during the research phase, and they were referenced continuously throughout the design process to keep the final product aligned with user needs and project goals.

Low-Fidelity Wireframes
Using Figma, I translated the initial hand-drawn sketches into low-fidelity wireframes to begin visualizing the app’s structure and user flow more clearly. These wireframes focused on layout, functionality, and navigation without detailed visual design, allowing for quick adjustments based on feedback. The wireframes then served as the foundation for user testing, where I conducted two rounds of sessions to gather insights on usability and the overall experience. The feedback from testing informed refinements that carried forward into the polished high-fidelity mockups, which combined visual style with interactive elements.

High-Fidelity Mockups
In Figma, the low-fidelity wireframes were developed into high-fidelity mockups, adding detailed visual elements such as color schemes, typography, and refined layouts to bring the design closer to a realistic user experience. After the high-fidelity mockups were finalized, two rounds of usability testing were conducted, and participant feedback drove a number of updates, including an easier-to-read leaderboard layout that improved clarity and engagement. These refinements, grounded in observed user behavior, enhanced both the usability and the visual appeal of the app and helped ensure the final design was intuitive and engaging.

Usability Testing
Following the completion of the high-fidelity mockups, usability testing was conducted to evaluate the effectiveness of the design and overall user experience. The first round of testing, carried out over two days, involved observing two users as they interacted with the mockups. Feedback was gathered through observation and follow-up questions, revealing specific opportunities for improvement. Based on these insights, targeted design refinements were made. A second round of testing was then conducted with two additional users to assess the effectiveness of the updates and confirm that previous issues had been addressed. This iterative approach ensured that the final design was intuitive, user-friendly, and aligned with user expectations.
Issue: Pop-up and page boundary confusion
Findings: A significant issue identified during usability testing was the lack of visual clarity distinguishing pop-ups from the main page. Users encountered difficulty determining where the pop-up ended and the underlying page began, which led to confusion about what was interactive and what was not. In the first round of testing, both participants attempted to engage with elements on the background, mistaking them for part of the pop-up. This ambiguity disrupted the flow of interaction and caused frustration, highlighting the need for stronger visual separation between interface layers.
Solution: Following the first round of usability testing, the pop-up was redesigned to address the confusion around its boundaries. To create clearer visual separation between the pop-up and the background content, two changes were made: the pop-up’s border was thickened, and the background was blurred to de-emphasize the page content behind it. The updated design was included in the second round of testing, where users were able to easily distinguish the pop-up from the rest of the page, resolving the previous issue.
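As a rough illustration of this fix, the sketch below assumes a SwiftUI build (the case study only specifies Figma for design work); all view, property, and label names are hypothetical. It layers the pop-up over a blurred, dimmed background and gives the card a thicker border so its boundary reads clearly.

```swift
import SwiftUI

// Hypothetical sketch of the revised pop-up treatment: the underlying
// page is blurred and dimmed, and the pop-up card gets a thicker border
// so users can tell where the pop-up ends and the page begins.
struct ResultPopupOverlay: View {
    let message: String
    @Binding var isPresented: Bool

    var body: some View {
        ZStack {
            // Main page content, de-emphasized while the pop-up is shown.
            MainChallengeView()
                .blur(radius: isPresented ? 6 : 0)

            if isPresented {
                // Dimming layer that also intercepts taps on the background.
                Color.black.opacity(0.3)
                    .ignoresSafeArea()

                // The pop-up card with a thickened border.
                VStack(spacing: 16) {
                    Text(message)
                        .multilineTextAlignment(.center)
                    Button("Continue") { isPresented = false }
                }
                .padding(24)
                .background(RoundedRectangle(cornerRadius: 16).fill(Color.white))
                .overlay(
                    RoundedRectangle(cornerRadius: 16)
                        .stroke(Color.accentColor, lineWidth: 4) // thicker border
                )
                .padding(40)
            }
        }
    }
}

// Placeholder for the underlying challenge screen.
struct MainChallengeView: View {
    var body: some View {
        Text("Real or AI-generated?")
    }
}
```

Used this way, the blur and dimming signal that the background is temporarily non-interactive, while the heavier border marks exactly where the pop-up ends.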
