ENT208TC Industry Readiness

Validation Guide

User testing answers one question: does your product actually work for the people you built it for? This guide gives you a simple recipe to follow, then explains each step in more detail below.



This is the core loop. Follow it every time you test. Aim for at least 5 sessions total across Weeks 5–8.

Step 1 — Recruit participants

Participants should match your target user. A classmate who would genuinely use your product is fine. Someone from outside your team is better.

You need at least 5 participants in total for the Validation Report. Book them early — do not leave all sessions to Week 8.

Step 2 — Write your tasks

A task asks participants to do something, not say something.

| Good task | Avoid |
| --- | --- |
| "Imagine you want to split a dinner bill. Show me what you would do." | "Do you like our app?" |
| "Find where to add a new expense." | "Is this design clear?" |
| "Check whether your friend paid you back." | "Would you use this?" |

Write the tasks down before the session. Three tasks is enough.

Step 3 — Run the session (about 20 minutes)

Opening (2 min):

"Thanks for helping. We're testing our design — not testing you. There are no wrong answers. Please think out loud as you go."

During (15 min):

  • Give them one task at a time
  • Do not help them — if they get stuck, that is the data
  • Write down: what they try first, where they pause, what they say, what they skip

After (3 min):

"Was anything confusing? What did you expect to happen when [thing that failed]? Would you actually use this?"

Step 4 — Write up what you found (10 minutes after the session)


Write one short paragraph per session. Include:

  • What they struggled with (specific, not vague)
  • One direct quote if you have one
  • One thing that surprised you

Store it in your team's shared folder with a date and participant number (not name).
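One way to follow that naming convention on the command line; the date and participant number below are invented examples:

```shell
# Create a notes folder and one session file named by date + participant
# number (no real names). The date and "P3" are made-up examples.
mkdir -p validation-notes
touch "validation-notes/2025-03-14_P3_session-notes.md"
ls validation-notes
```

Because the date comes first in ISO format, sorting the folder by filename also sorts the sessions in date order.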

Step 5 — Change one thing

Pick one thing they struggled with. Change it. Write in your Dev Log:

  • What you found (participant quote or observation)
  • What you changed
  • Why that change should help

That is one complete iteration. Repeat.


The Validation Report is 10% of your module grade. Here is what moves your score up or down:

| What you do | Lower score | Higher score |
| --- | --- | --- |
| Participants | Fewer than 5 total | 5+ with varied backgrounds |
| Evidence quality | "Users liked it" / "some had issues" | Direct quotes and specific observations |
| Iteration | "We improved the design" | "P3 couldn't find delete → we moved it → P5 found it immediately" |
| Before/after | Changes described without evidence | Screenshots or notes showing the change |
| Documentation timing | Written at the end from memory | Notes taken during or right after each session |
| Appendix links | Missing or broken | Working links to transcripts, notes, or recordings |

The most common reason for a low score: building for weeks, testing once at the end, and writing it up from memory. Test early. Document as you go.


The module assesses process over technology. Your Validation Report is built entirely from the evidence you collect here. Teams who test early and often, and who change their product based on what they learn, score significantly higher than teams who build for weeks and test once at the end.

The core mindset shift:

Before: "We think users will want this feature."

After: "We tested with 5 users and found that 3 of them couldn't find the feature. We moved it to the main screen and tested again."


The type of testing changes as your product matures.

| Week | What to test | What you are finding out |
| --- | --- | --- |
| 3–4 | The problem (interviews) | Does the problem actually exist for real people? |
| 5 | The concept (sketch or Figma) | Do people understand what you are building? |
| 5–6 | Early prototype (partial working feature) | Can users complete the core task? |
| 7–8 | Working product | Where do users still get stuck before Demo Day? |

Purpose: Confirm that the problem you defined in your Project Brief is real — before you spend weeks building a solution.

If your team completed any validation in Week 3 (user conversations, survey reuse, competitive benchmarking, secondary research), your Stage 1 is done. Document it in your Week 3 Dev Log entry and move to Stage 2.

If you skipped validation in Week 3, do it now. Talk to 3–5 people who match your target user. You are not showing them a product — you are checking whether they genuinely experience the problem.

Good interview questions are open (not yes/no) and backward-looking (about real past experiences):

| Good | Avoid |
| --- | --- |
| "Tell me about the last time you experienced [the problem]." | "Would you use an app that solved this?" |
| "What do you currently do when [the situation] happens?" | "Do you think this is a common problem?" |
| "Walk me through what happened — what did you do first?" | "Do you like this idea?" |

People are unreliable predictors of their future behaviour. They are much more reliable reporters of past experiences.

Other options if interviews are hard to arrange:

  • Competitive benchmark — find 3–5 existing products. Read their reviews. What do users consistently complain about? That gap is your opportunity.
  • Secondary research — find evidence the problem exists: forum posts, Weibo, Xiaohongshu, market reports. Two or three credible sources confirming the problem is enough.
  • Reuse ENT207TC research — if your team validated a problem last semester, reuse it. Document it in your Dev Log as evidence.

Purpose: Check whether your proposed solution makes sense to users before you spend weeks building it.

Show people your idea — a Figma prototype or paper sketches work. You are answering: "Do users understand what this product does, and do they want it?"

  1. Show the interface without explaining anything
  2. Ask: "What do you think this is? What would you do first?"
  3. Observe — do not correct them or fill in gaps
  4. After they explore, ask: "What is confusing? What is missing?"

This is the main testing stage. Use the five-step recipe at the top of this page. Focus on watching — not explaining or defending.

What counts as a finding:

  • Something a user tried that didn't match your design
  • A feature they expected that doesn't exist
  • A label or button they misread
  • Something they didn't notice at all

Ethics basics:

  • Tell participants: "We are testing the product, not testing you. There are no wrong answers."
  • Ask permission to take notes or record (voice or screen)
  • Do not use photos of participants without permission
  • Keep notes and recordings confidential — use participant numbers, not names

Testing is only valuable if you change something because of it. For each round, document all four:

  1. What you tested — which feature, which users, which tasks
  2. What you found — specific quotes and observations
  3. What you changed — the exact change made
  4. Why — connect the change back to the finding
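If your Dev Log is a plain-text file, one round can be appended as a four-line block. Everything below is an invented example, not real session data:

```shell
# Append one complete iteration record (tested / found / changed / why)
# to the team Dev Log. All values are made-up examples.
cat >> devlog.md <<'EOF'
## Test round 2
- Tested: add-expense flow, tasks 1-3, participants P3 and P4
- Found: P3 said "I looked for a plus button and could not find one"
- Changed: moved the add-expense button to the main screen
- Why: both participants searched the main screen first
EOF
```

Keeping all four lines together per round makes the before/after comparison for the Iteration History section a copy-paste job later.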

Your Validation Report (Week 11, 10% of module) is built from evidence collected during Weeks 5–8. Keep records as you go — do not try to reconstruct them at the end.

⬇ Download Validation Report template — save a copy to your team folder now. Fill in the cover page. Add session notes as you go.

| Section | What to include |
| --- | --- |
| Research Methodology | Who you recruited, how many, your session protocol |
| Key Findings | Findings by theme — direct quotes and observations, not summaries |
| Iteration History | What changed after each test round — before/after comparisons |
| Evidence Appendix | Links to notes, transcripts, recordings — must be accessible |

| Resource | Notes |
| --- | --- |
| Product Development Glossary | Definitions for validation, iteration, user story, etc. |
| Development Guide | Sprint structure, Kanban, Dev Log template |
| Assessment Brief | Validation Report rubric (Section 5.7.3) |