ENT208TC Industry Readiness

Week 5: Build Sprint 1

| # | Mission | Time |
|---|---------|------|
| 1 | 🗣️ Standup + Kanban | 40 min |
| 2 | 📝 User stories + acceptance criteria | 20 min |
| 3 | 🏗️ Architecture with AI | 25 min |
| 4 | 🚀 First prototype commit | remaining |

Before anything else: get your whole team together. Everyone stands up — physically. Standing keeps it short because nobody wants to stand for long. Each person answers exactly three questions:

  1. What did I do since the last session?
  2. What will I do in this session?
  3. Any blockers? Something stopping me that I need help with.

What it sounds like:

"I wrote the user story for the feeding log. Today I'm adding acceptance criteria and breaking it into tasks. No blockers."

That's it. Thirty seconds. Move to the next person.

No problem-solving during the standup. If something needs discussion, write it down and come back to it after. The standup is for the team — not a status report to the teacher.

Timer guide: 2 min per person's turn, 15 min cap for the whole standup, then 25 min for the Kanban organisation phase that follows.

After the standup — 10 minutes on your Kanban:

  • Move completed tasks to Done
  • Make sure every Must-have has at least one task in the Doing column with a named owner
  • Any task with no name on it — give it one now
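These board rules are mechanical enough to check in code. A throwaway sketch in plain JavaScript (the board shape and names are invented for illustration, not a required format):

```javascript
// Invented example board: one task per row, tagged with its Must-have.
const board = [
  { mustHave: "Cat profile", status: "Doing", owner: "Li" },
  { mustHave: "Feeding log", status: "To Do", owner: "" },
  { mustHave: "Health diary", status: "Doing", owner: "Amir" },
];

// Must-haves that have no task in Doing with a named owner: these need attention now.
function unownedMustHaves(tasks) {
  const covered = new Set(
    tasks
      .filter((t) => t.status === "Doing" && t.owner.trim() !== "")
      .map((t) => t.mustHave)
  );
  const all = new Set(tasks.map((t) => t.mustHave));
  return [...all].filter((m) => !covered.has(m));
}

console.log(unownedMustHaves(board)); // → ["Feeding log"]
```

If that function returns anything, fix the board before moving on.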

Most of your team is not ready to code yet — and that is fine. Jumping straight into code without solid user stories and acceptance criteria produces code you have to throw away. Here is the recommended split:

| Who | What |
|-----|------|
| Most of the team (PM + Designer + User Researcher) | Refine the Brief if needed · Write or sharpen user stories · Add acceptance criteria · Confirm tech choices with AI (ask: "Is this stack realistic for our team in 4 weeks?") |
| 1–2 developers | Explore and evaluate agentic coding tools — pick a real task from the Kanban board and try it in two different tools · Document findings in Dev Log |

This is not about coding faster. It is about making sure the code you eventually write solves the right problem.

Cursor, Qwen Code, and GitHub Copilot all work as agentic assistants — you describe a task and they propose and write the code. They differ in how well they handle open-ended versus specific tasks, and in what works from inside China. See AI Coding Tools for a comparison.

GitHub Copilot Coding Agent (preview, April 2025) goes one step further: assign a GitHub Issue to Copilot and it writes the code autonomously, then opens a pull request for your team to review. Your developers evaluate and merge — or reject. See the full guide: Copilot Coding Agent →

The key insight from every tool: they all work better when your task is specific and your acceptance criteria are written down. Which is exactly what the rest of the team is doing right now.



flowchart TD
A["📋 PROJECT BRIEF\nproblem · users · why it matters"] --> B["🎯 MoSCoW — 3 Must-haves\ndecided in Week 4"]
B --> |"one per Must-have"| C["📝 USER STORY + ACCEPTANCE CRITERIA\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n'As [role], I want [action],\n so that [testable outcome]'\n\nAcceptance criteria:\nHow do you know it's done?\nCan a real user test it?"]
C --> D["📊 KANBAN TASKS\n3–5 tasks per Must-have\nNamed owner · ≤ half a day each"]
D --> E["🚀 FIRST PROTOTYPE\nWorking code · or even a sketch\nSomething a user can react to"]
E --> F["✅ VALIDATE\n5 users test the story\npaper sketch is enough"]
style A fill:#1d4ed8,color:#fff,stroke:#60a5fa,stroke-width:2px
style B fill:#0369a1,color:#fff,stroke:#38bdf8,stroke-width:2px
style C fill:#15803d,color:#fff,stroke:#4ade80,stroke-width:3px
style D fill:#b45309,color:#fff,stroke:#fbbf24,stroke-width:2px
style E fill:#7c3aed,color:#fff,stroke:#c084fc,stroke-width:2px
style F fill:#be123c,color:#fff,stroke:#fb7185,stroke-width:2px

User story + acceptance criteria = your PRD (customer-facing: what users get and how you know it works)

Architecture + data schema = your Technical Documentation (builder-facing: how you build it — started in Week 4, continues all semester)

These are two different documents. Do not mix them. Your PRD describes the experience; your Technical Documentation describes the implementation.


In Week 4 you used AI to generate an initial PRD with user stories, a feature list, and a rough architecture sketch. This week you deepen it — for each Must-have, you add acceptance criteria: specific, testable conditions that confirm the story is done.

| | User Story | Acceptance Criteria |
|---|-----------|---------------------|
| Question it answers | Who needs this, and why? | How do you know it works? |
| Written by | Whole team | Whole team (PM leads) |
| Format | One sentence | 2–4 bullet points |
| Contains | Role + action + outcome | Measurable conditions, edge cases |
| Becomes | The validation test | The Kanban task acceptance definition |

The β€œso that [outcome]” clause of your user story is not decoration β€” it is directly what you measure when you validate. Write it like you will test it, because you will.


This week is not about quantity of output — it is about quality of thinking. A team that writes two sharp user stories and tests one with a real person is doing better work than a team that writes ten vague ones and tests none.

| What you do this week | Why it matters | Where assessors look |
|-----------------------|----------------|----------------------|
| Write user stories with a specific role and testable outcome | Forces you to think about real people, not imaginary users | PRD / Technical Documentation |
| Add acceptance criteria to each Must-have | Clear finish line for developers — no "is it done?" arguments | Technical Documentation |
| Break work into named Kanban tasks with owners | Individual contribution visible; prevents one person doing everything | Checkpoint 1 evidence |
| Book and run user sessions this week | Stories are hypotheses — test early, build less wrong | Validation Report |
| Record decisions in your Dev Log | Pathfinder sees your thinking; generic entries score low | Dev Log (7 required total) |
| Push commits with clear messages | Individual accountability — who did what, when | Technical Documentation / GitHub |

MiaoLog is a fictional campus cat diary app used throughout this guide.

[Image: MiaoLog project idea sketch — campus cat map with feeding stations, cat hotels, and PWA interface]

The brief: A Progressive Web App (PWA) for XJTLU campus cat volunteers. Volunteers track feedings, health notes, and sightings — replacing fragmented WeChat messages. The app works on mobile without installation. A stretch goal: point the camera at any campus cat and AI identifies it by name, showing its feeding history and health notes instantly.

Must-haves (agreed Week 4): 1) Cat profile · 2) Feeding log · 3) Health diary

Could-have (stretch): 4) AI photo identification — snap a photo, get the cat's name and history


A user story is a hypothesis. Write it from your Brief, then test whether real people agree.

Bad vs good:

| Bad ✗ | Good ✓ |
|-------|--------|
| "As a user, I want to log cat information." | "As a volunteer feeder, I want to record that I fed Xiaobai at 7pm with 100g of dry food, so that other volunteers can see she has been fed and won't overfeed her." |
| "As an admin, I want to manage cats." | "As a group coordinator, I want to flag that Huahua hasn't eaten in two days, so that someone with more experience can follow up before it becomes a health issue." |
| "As a user, I want to view cats." | "As a new volunteer, I want to see each cat's recent feeding and health notes on one screen, so that I know what to look for before I walk around campus." |
| "As a user, I want to use AI." | "As a new volunteer who doesn't know the cats yet, I want to take a photo of a cat I find on campus and have the app tell me its name, last feeding time, and any health notes, so that I can help without needing to ask someone every time." |

Three Must-haves = three user stories minimum. One per Must-have. If your brief has a stretch goal involving AI or an interesting technical feature, write that user story too — it sharpens your thinking even if you don't build it this sprint.


Step 2 — Acceptance criteria: make the outcome testable


Open your PRD (the document from Week 4). Under each user story, add acceptance criteria — 2–4 bullets that describe exactly how you would confirm the story works with a real user.

MiaoLog — Must-have 2 (feeding log):

User Story:
As a volunteer feeder, I want to record who fed which cat, when, and
how much, so that other volunteers can check before feeding.
Acceptance Criteria:
✓ A volunteer can see the last 24h of feedings for any cat in under 5 taps
✓ After logging a feeding, a confirmation message shows within 2 seconds
✓ If two feeders log within 10 minutes for the same cat, a warning appears
✓ In a 1-week trial: at least 4 out of 5 volunteers check the log
before deciding to feed
UI description (for the designer):
Cat selector → feeding form (time defaults to now, food type, amount)
→ confirmation screen → return to cat profile

Notice: no database schema, no API routes, no Supabase.from() calls here. Those belong in your Technical Documentation. This document describes the experience; the other describes the implementation.
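When the developers do pick this up in the Technical Documentation, the duplicate-warning criterion becomes a small piece of pure logic they can unit-test before touching any UI or database. A sketch in plain JavaScript (function and field names are invented):

```javascript
// Acceptance criterion: "If two feeders log within 10 minutes for the
// same cat, a warning appears." The check is pure logic: compare the
// attempted log against existing logs for the same cat.
function shouldWarnDuplicate(existingLogs, newLog, windowMinutes = 10) {
  const windowMs = windowMinutes * 60 * 1000;
  return existingLogs.some(
    (log) =>
      log.catId === newLog.catId &&
      Math.abs(newLog.fedAt - log.fedAt) <= windowMs
  );
}

const logs = [{ catId: "xiaobai", fedAt: Date.parse("2025-03-03T19:00:00Z") }];
const attempt = { catId: "xiaobai", fedAt: Date.parse("2025-03-03T19:08:00Z") };

console.log(shouldWarnDuplicate(logs, attempt)); // → true (8 min apart)
```

Writing the criterion this precisely is what makes the function this easy to write.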

Now update your architecture (one section at the top of the PRD, or a separate linked document):

Architecture (confirmed Week 5):
Frontend: React (Vite) → deployed to Vercel
Data + Storage: Supabase (PostgreSQL + file storage)
- Direct Supabase JS client from frontend (no separate backend needed)
Two platforms. Both free. Both accessible from China.
Add Vercel serverless functions only if you need
to call paid APIs with secret keys — most teams don't.
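Because the frontend talks to Supabase directly, data-shaping logic lives in the client too. A hypothetical sketch of preparing a feeding-log row before handing it to the Supabase client; the column names are an assumption, so match them to your actual feeding_logs schema:

```javascript
// Shapes and validates one feeding-log row before it is handed to the
// Supabase client (e.g. supabase.from("feeding_logs").insert(row)).
// Column names are an assumption; match your actual table schema.
function buildFeedingRow({ catId, feederId, amountGrams, foodType, fedAt }) {
  if (!catId || !feederId) throw new Error("catId and feederId are required");
  if (!(amountGrams > 0)) throw new Error("amountGrams must be positive");
  return {
    cat_id: catId,
    feeder_id: feederId,
    amount_grams: amountGrams,
    food_type: foodType ?? "dry", // default food type
    fed_at: fedAt ?? new Date().toISOString(), // time defaults to now
  };
}

const row = buildFeedingRow({ catId: "xiaobai", feederId: "vol-07", amountGrams: 100 });
// row.food_type defaults to "dry"; row.fed_at defaults to the current time
```

Keeping validation in one small function like this means the form component and the database call both stay simple.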

Each acceptance criterion that requires building something becomes a task. Keep tasks small — one person, less than half a day.

MiaoLog — feeding log tasks (from the acceptance criteria above):

| Task | Role | Estimate |
|------|------|----------|
| Design feeding form in Figma (cat selector + fields + confirmation) | UX Designer | 1.5 hr |
| Create feeding_logs table in Supabase with schema | Technical Lead | 30 min |
| Build feeding form component, connect to Supabase | Frontend Dev | 2 hr |
| Add duplicate-feeding warning (check last 10 min) | Frontend Dev | 1 hr |
| Test with 2 actual volunteers | User Researcher | 45 min |

Add every task to your Kanban board with a name. Move one to "Doing" before you leave.
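Note how directly the tasks trace back to the acceptance criteria. The first criterion ("last 24h of feedings for any cat") is mostly pure logic once the rows are fetched; an illustrative sketch in plain JavaScript (field names invented):

```javascript
// Returns feedings for one cat in the last 24 hours, newest first:
// the data the feeding-log screen needs to render.
function recentFeedings(logs, catId, now) {
  const dayMs = 24 * 60 * 60 * 1000;
  return logs
    .filter((l) => l.catId === catId && now - l.fedAt <= dayMs)
    .sort((a, b) => b.fedAt - a.fedAt);
}

const now = Date.parse("2025-03-04T09:00:00Z");
const logs = [
  { catId: "xiaobai", fedAt: Date.parse("2025-03-03T19:00:00Z") }, // 14h ago
  { catId: "xiaobai", fedAt: Date.parse("2025-03-02T19:00:00Z") }, // 38h ago
  { catId: "huahua", fedAt: Date.parse("2025-03-04T07:00:00Z") },
];

console.log(recentFeedings(logs, "xiaobai", now).length); // → 1
```

Passing `now` in as a parameter (rather than calling Date.now() inside) is what makes the function trivially testable.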


Once a task is in Doing, write some code. The goal is not finished code — it is something a user can react to. A form that submits, a screen that shows data, a button that does something. Use this prompt pattern for AI assistance:

I am building [one sentence about your product].
Stack: [frontend framework + Supabase, or your team's actual stack].
I need to implement this user story:
"[paste your user story]"
Acceptance criteria:
[paste your acceptance criteria bullets]
Generate the [component / function / table definition] for this.
Include input validation and error handling.
Add inline comments explaining each step.

See AI Coding Tools for which tool to use at each stage. New to Git? Git Basics covers everything you need for your first commit — branch, commit, push, pull, with commands you can copy.


Your user stories are hypotheses. You do not need working code to test them — a paper sketch or Figma mockup is enough for the first round.

This week: your User Researcher identifies and contacts 2–3 real people. Agree a time in Week 6. Add names and dates to your Dev Log now.

Ask: "I'm going to show you a rough sketch of something we're building. Can you tell me if it makes sense?" Then watch — don't explain.

What you're testing: does the "so that [outcome]" clause describe something that person actually cares about?

See the Validation Guide for the session guide and note template.


Over 60% of teams entering Week 5 have scope or feasibility concerns flagged by their pathfinder. If yours did — this is the moment to cut.

The rule: if you cannot have a real user complete a task on this feature by Week 8, it is not a Must-have. Move it to Should-have and focus.


Mission 1 — Standup + Kanban (whole team, 40 min)

  1. Everyone stands up — three questions each, max 2 min per person
  2. Move completed tasks to Done on the Kanban board
  3. Every Must-have has at least one task in Doing with a named owner
  4. Unassigned tasks — give them a name now
Mission 2 — User stories (20 min)

  1. Open your Week 4 PRD
  2. For each Must-have: write or refine the user story — specific role, specific action, testable outcome
  3. Ask: could you test this with a real person in 20 minutes? If not, rewrite it
  4. Add to Dev Log Week 5 entry

Mission 3 — Architecture + acceptance criteria (whole team, 25 min)

  1. Under each user story, add 2–4 acceptance criteria (PM leads, everyone contributes)
  2. Confirm your architecture — one section in the PRD: which frontend framework, which database, where deployed
  3. Break each Must-have into Kanban tasks (3–5 each, names assigned)
  4. Note the two biggest technical risks in the Dev Log

Mission 4 — First prototype commit (developers, self-paced)

  1. Pick your first Kanban task. Move it to "Doing"
  2. Write your AI prompt — paste in your user story and acceptance criteria
  3. Review the output: can you explain every line?
  4. Get something running — even partially. Commit it to GitHub with a clear message
  5. Paste the commit link in your Dev Log entry

Identify 2–3 real people and contact them now. Book sessions for Week 6. Confirm: name, date, time. Add to Dev Log.


Four things to have done today:

  • User stories + acceptance criteria in your PRD, linked from Dev Log
  • Architecture confirmed (one paragraph or diagram in the PRD)
  • At least one Kanban task in "Doing" with your name on it
  • Two user sessions booked — dates confirmed, names recorded