Week 5: Build Sprint 1
| # | Mission | Time |
|---|---|---|
| 1 | 🗣️ Standup + Kanban | 40 min |
| 2 | 📝 User stories + acceptance criteria | 20 min |
| 3 | 🏗️ Architecture with AI | 25 min |
| 4 | 🚀 First prototype commit | remaining |
Mission 1 — Team standup + Kanban
Before anything else: get your whole team together. Everyone stands up — physically. Standing keeps it short because nobody wants to stand for long. Each person answers exactly three questions:
- What did I do since the last session?
- What will I do in this session?
- Any blockers? Something stopping me that I need help with.
What it sounds like:
"I wrote the user story for the feeding log. Today I'm adding acceptance criteria and breaking it into tasks. No blockers."
That's it. Thirty seconds. Move to the next person.
No problem-solving during the standup. If something needs discussion, write it down and come back to it after. The standup is for the team — not a status report to the teacher.
Timebox it: 2 minutes per person's turn, 15 minutes for the whole standup, then 25 minutes for the Kanban organisation phase that follows.
After the standup — 10 minutes on your Kanban:
- Move completed tasks to Done
- Make sure every Must-have has at least one task in the Doing column with a named owner
- Any task with no name on it — give it one now
How your team should split today
Most of your team is not ready to code yet — and that is fine. Jumping straight into code without solid user stories and acceptance criteria produces code you have to throw away. Here is the recommended split:
| Who | What |
|---|---|
| Most of the team (PM + Designer + User Researcher) | Refine the Brief if needed · Write or sharpen user stories · Add acceptance criteria · Confirm tech choices with AI (ask: "Is this stack realistic for our team in 4 weeks?") |
| 1–2 developers | Explore and evaluate agentic coding tools — pick a real task from the Kanban board and try it in two different tools · Document findings in Dev Log |
This is not about coding faster. It is about making sure the code you eventually write solves the right problem.
AI tools worth exploring right now
Cursor, Qwen Code, and GitHub Copilot all work as agentic assistants — you describe a task and they propose and write the code. They differ in how well they handle open-ended versus specific tasks, and in what works from inside China. See AI Coding Tools for a comparison.
GitHub Copilot Coding Agent (preview, April 2025) goes one step further: assign a GitHub Issue to Copilot, and it writes the code autonomously and opens a pull request for your team to review. Your developers evaluate and merge — or reject. See the full guide: Copilot Coding Agent →
The key insight from every tool: they all work better when your task is specific and your acceptance criteria are written down. Which is exactly what the rest of the team is doing right now.
How everything connects
```mermaid
flowchart TD
    A["📋 PROJECT BRIEF\nproblem · users · why it matters"] --> B["🎯 MoSCoW → 3 Must-haves\ndecided in Week 4"]
    B -->|"one per Must-have"| C["📝 USER STORY + ACCEPTANCE CRITERIA\n'As [role], I want [action],\nso that [testable outcome]'\n\nAcceptance criteria:\nHow do you know it's done?\nCan a real user test it?"]
    C --> D["📋 KANBAN TASKS\n3–5 tasks per Must-have\nNamed owner · ≤ half a day each"]
    D --> E["🚀 FIRST PROTOTYPE\nWorking code · or even a sketch\nSomething a user can react to"]
    E --> F["✅ VALIDATE\n5 users test the story\npaper sketch is enough"]
    style A fill:#1d4ed8,color:#fff,stroke:#60a5fa,stroke-width:2px
    style B fill:#0369a1,color:#fff,stroke:#38bdf8,stroke-width:2px
    style C fill:#15803d,color:#fff,stroke:#4ade80,stroke-width:3px
    style D fill:#b45309,color:#fff,stroke:#fbbf24,stroke-width:2px
    style E fill:#7c3aed,color:#fff,stroke:#c084fc,stroke-width:2px
    style F fill:#be123c,color:#fff,stroke:#fb7185,stroke-width:2px
```

User story + acceptance criteria = your PRD (customer-facing: what users get and how you know it works)
Architecture + data schema = your Technical Documentation (builder-facing: how you build it β started in Week 4, continues all semester)
These are two different documents. Do not mix them. Your PRD describes the experience; your Technical Documentation describes the implementation.
What is the PRD, exactly?
In Week 4 you used AI to generate an initial PRD with user stories, a feature list, and a rough architecture sketch. This week you deepen it — for each Must-have, you add acceptance criteria: specific, testable conditions that confirm the story is done.
| | User Story | Acceptance Criteria |
|---|---|---|
| Question it answers | Who needs this, and why? | How do you know it works? |
| Written by | Whole team | Whole team (PM leads) |
| Format | One sentence | 2–4 bullet points |
| Contains | Role + action + outcome | Measurable conditions, edge cases |
| Becomes | The validation test | The Kanban task acceptance definition |
The "so that [outcome]" clause of your user story is not decoration — it is exactly what you measure when you validate. Write it like you will test it, because you will.
PRD (Word document on XJTLU OneDrive, started Week 4): user stories + acceptance criteria + feature list + high-level architecture (which platforms, which stack)
Technical Documentation (the Week 11 portfolio document you start now): data schema, API design, deployment guide, IP strategy — the builder detail. ⬇ Download template — save a copy to your team folder and fill in the cover page now.
Validation Report (built from user sessions throughout Weeks 5–8): ⬇ Download template — create the folder and add notes after each session. You will thank yourself in Week 11.
What good process looks like this week
This week is not about quantity of output — it is about quality of thinking. A team that writes two sharp user stories and tests one with a real person is doing better work than a team that writes ten vague ones and tests none.
| What you do this week | Why it matters | Where assessors look |
|---|---|---|
| Write user stories with a specific role and testable outcome | Forces you to think about real people, not imaginary users | PRD / Technical Documentation |
| Add acceptance criteria to each Must-have | Clear finish line for developers — no "is it done?" arguments | Technical Documentation |
| Break work into named Kanban tasks with owners | Individual contribution visible; prevents one person doing everything | Checkpoint 1 evidence |
| Book and run user sessions this week | Stories are hypotheses — test early, build less wrong | Validation Report |
| Record decisions in your Dev Log | Pathfinder sees your thinking; generic entries score low | Dev Log (7 required total) |
| Push commits with clear messages | Individual accountability β who did what, when | Technical Documentation / GitHub |
Worked example: MiaoLog 喵Log
MiaoLog is a fictional campus cat diary app used throughout this guide.

The brief: A Progressive Web App (PWA) for XJTLU campus cat volunteers. Volunteers track feedings, health notes, and sightings — replacing fragmented WeChat messages. The app works on mobile without installation. A stretch goal: point the camera at any campus cat and AI identifies it by name, showing its feeding history and health notes instantly.
Must-haves (agreed Week 4): 1) Cat profile · 2) Feeding log · 3) Health diary
Could-have (stretch): 4) AI photo identification — snap a photo, get the cat's name and history
Step 1 — User stories: write now, test this week
A user story is a hypothesis. Write it from your Brief, then test whether real people agree.
Bad vs good:
| Bad ❌ | Good ✅ |
|---|---|
| "As a user, I want to log cat information." | "As a volunteer feeder, I want to record that I fed Xiaobai at 7pm with 100g of dry food, so that other volunteers can see she has been fed and won't overfeed her." |
| "As an admin, I want to manage cats." | "As a group coordinator, I want to flag that Huahua hasn't eaten in two days, so that someone with more experience can follow up before it becomes a health issue." |
| "As a user, I want to view cats." | "As a new volunteer, I want to see each cat's recent feeding and health notes on one screen, so that I know what to look for before I walk around campus." |
| "As a user, I want to use AI." | "As a new volunteer who doesn't know the cats yet, I want to take a photo of a cat I find on campus and have the app tell me its name, last feeding time, and any health notes, so that I can help without needing to ask someone every time." |
Three Must-haves = three user stories minimum. One per Must-have. If your brief has a stretch goal involving AI or an interesting technical feature, write that user story too — it sharpens your thinking even if you don't build it this sprint.
Step 2 — Acceptance criteria: make the outcome testable
Open your PRD (the document from Week 4). Under each user story, add acceptance criteria — 2–4 bullets that describe exactly how you would confirm the story works with a real user.
MiaoLog — Must-have 2 (feeding log):
User Story: As a volunteer feeder, I want to record who fed which cat, when, and how much, so that other volunteers can check before feeding.
Acceptance Criteria:
- A volunteer can see the last 24h of feedings for any cat in under 5 taps
- After logging a feeding, a confirmation message shows within 2 seconds
- If two feeders log within 10 minutes for the same cat, a warning appears
- In a 1-week trial: at least 4 out of 5 volunteers check the log before deciding to feed

UI description (for the designer): Cat selector → feeding form (time defaults to now, food type, amount) → confirmation screen → return to cat profile

Notice: no database schema, no API routes, no `supabase.from()` calls here. Those belong in your Technical Documentation. This document describes the experience; the other describes the implementation.
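When that detail does land in your Technical Documentation, the frontend-to-Supabase call is small. Here is a hedged sketch: the `feeding_logs` table and its columns are our invented example, and the function takes the client as a parameter so it can be unit-tested with a stub before any real Supabase project exists.

```typescript
// The slice of the supabase-js client this feature uses. In the real
// app this object comes from createClient(url, anonKey).
interface FeedingClient {
  from(table: string): {
    insert(rows: object[]): Promise<{ error: { message: string } | null }>;
  };
}

// Shape of one row in the (hypothetical) feeding_logs table.
interface FeedingEntry {
  cat_id: number;
  volunteer: string;
  fed_at: string; // ISO timestamp
  grams: number;
}

// Insert one feeding; resolves to an error message, or null on success.
async function logFeeding(
  client: FeedingClient,
  entry: FeedingEntry,
): Promise<string | null> {
  const { error } = await client.from("feeding_logs").insert([entry]);
  return error ? error.message : null;
}

// Stub client: records the rows it receives, so the logic runs offline.
const sent: FeedingEntry[] = [];
const stub: FeedingClient = {
  from: () => ({
    insert: async (rows) => {
      sent.push(...(rows as FeedingEntry[]));
      return { error: null };
    },
  }),
};

logFeeding(stub, {
  cat_id: 1,
  volunteer: "Ana",
  fed_at: "2025-03-03T19:00:00Z",
  grams: 100,
}).then((err) => console.log(err, sent.length)); // null 1
```

In the real app the stub is replaced by `createClient(...)` from `@supabase/supabase-js` and nothing else changes — that is the whole "no separate backend" pattern.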
You now have a PRD with user stories and acceptance criteria. Before writing the architecture yourself, try this:
Step 1 — Prepare your PRD summary. Write or copy a plain-language description of your product. Include: what it does, who uses it (with specific roles), your Must-haves as user stories with acceptance criteria, and any hard constraints (mobile, China access, student team, 4-week timeline, free-tier only). Do NOT include any technology choices yet — let the AI make recommendations.
Here is what MiaoLog's input looks like:
Product: MiaoLog 喵Log — a Progressive Web App for XJTLU campus cat volunteers.
Problem: Campus cat welfare depends on volunteers coordinating through informal WeChat groups. There is no shared record of which cats have been fed, their health history, or where they live. Overfeeding and missed health issues are common.
Users:
- Volunteer feeder — a student who feeds cats 2–3 times per week, often in a hurry
- Group coordinator — a senior volunteer who manages the whole colony, needs oversight
- New volunteer — a student joining for the first time who does not know any of the cats
Must-haves and acceptance criteria:
Cat profile — As a volunteer, I want to see each campus cat's name, photo, home area, and current health status, so that I know which cat I am looking at before I feed or help it.
- Profile loads in under 3 seconds on a mobile connection
- Shows last feeding time and health flag at a glance
- Works offline if the volunteer has visited before (PWA cache)

Feeding log — As a volunteer feeder, I want to record who fed which cat, when, and how much, so that other volunteers can check before feeding and avoid overfeeding.
- A volunteer can log a feeding in under 5 taps
- The last 24h of feedings for any cat is visible without scrolling
- If two feeders log within 10 minutes for the same cat, a warning appears

Health diary — As a group coordinator, I want to flag health concerns for a specific cat with a severity level, so that other volunteers know to pay attention or escalate to a vet.
- A note can be added with one of three severity levels: normal, concern, urgent
- Urgent notes trigger a visible banner on the cat's profile
- The coordinator can mark a note as resolved
Constraints: must work on mobile without installation (PWA), accessible from mainland China, student team of 4, 4-week build timeline, free-tier cloud services only, no budget.
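Criteria written at this level of precision are already executable logic. As an illustration, here is a sketch of the health-diary banner rule in TypeScript — the type and function names are our own, not part of the PRD:

```typescript
// Severity levels from the health-diary acceptance criteria.
type Severity = "normal" | "concern" | "urgent";

interface HealthNote {
  id: number;
  severity: Severity;
  resolved: boolean; // the coordinator can mark a note as resolved
}

// The urgent banner shows while any urgent note is unresolved.
function showUrgentBanner(notes: HealthNote[]): boolean {
  return notes.some((n) => n.severity === "urgent" && !n.resolved);
}

// Coordinator resolves a note; returns a new array rather than mutating.
function resolveNote(notes: HealthNote[], id: number): HealthNote[] {
  return notes.map((n) => (n.id === id ? { ...n, resolved: true } : n));
}

const notes: HealthNote[] = [
  { id: 1, severity: "concern", resolved: false },
  { id: 2, severity: "urgent", resolved: false },
];
console.log(showUrgentBanner(notes)); // true
console.log(showUrgentBanner(resolveNote(notes, 2))); // false
```

Each bullet in the criteria maps to one assertion you could automate — that is what "testable" buys you.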
Step 2 — Open a leading AI model in thinking/reasoning mode and paste this prompt. Good options: Xipu AI (select any model with "thinking" enabled), DeepSeek-R1, or ChatGPT with reasoning. Try it in two different models and compare — you will often get meaningfully different architectural recommendations.
You are a senior software architect reviewing a product requirements document written by a university student team.
[PASTE YOUR PRD SUMMARY HERE]
Based on this PRD, please provide:
1. Technical Architecture — recommend a complete stack (frontend, backend, database, hosting, any third-party APIs). Justify each choice for a small student team with 4 weeks and free-tier budgets only.
2. Data Model — outline the key entities and relationships needed to support the user stories above. Name the tables and their main fields.
3. Security Considerations — what are the main risks in this type of product, and how should we address them at a student-project level?
4. Scalability — if 200 students used this simultaneously, where would the system struggle? What would you change?
5. Accessibility — what accessibility requirements are implied by the user stories? What might a student team overlook?
6. Quality and Testing — what should we test first? What are the highest-risk parts of this product for a first-time team?
Be specific and practical. Avoid enterprise-level complexity. This team needs to ship something a real user can test within 4 weeks.
What to expect: The AI will surface concerns your team probably hasn't discussed — authentication edge cases, simultaneous submissions, offline sync conflicts, whether a photo upload will work on slow campus Wi-Fi. These are real problems worth a 5-minute team discussion before you write any code.
Bonus: If you ran two models, note where they disagreed. That disagreement is itself a design decision your team needs to make. Record it in your Dev Log.
The architecture it generates becomes the foundation of your Technical Documentation. Copy it in, then refine it with your teamβs actual decisions.
Now update your architecture (one section at the top of the PRD, or a separate linked document):
Architecture (confirmed Week 5):
- Frontend: React (Vite) — deployed to Vercel
- Data + Storage: Supabase (PostgreSQL + file storage) — direct Supabase JS client from the frontend (no separate backend needed)

Two platforms. Both free. Both accessible from China. Add Vercel serverless functions only if you need to call paid APIs with secret keys — most teams don't.

If your team studied UML last semester, a domain model helps clarify what your system stores and how the pieces connect — before you touch any code. You do not need one to pass; it is useful if your data model is complex.
MiaoLog domain model (Mermaid class diagram):
```mermaid
classDiagram
    direction LR
    class Cat {
        +name: String
        +photoUrl: String
        +colour: String
        +homeArea: String
        +status: active | missing
    }
    class Volunteer {
        +name: String
        +role: feeder | coordinator
        +contact: String
    }
    class FeedingLog {
        +timestamp: DateTime
        +foodType: String
        +grams: Int
    }
    class Sighting {
        +timestamp: DateTime
        +location: String
        +photoUrl: String
    }
    class AIIdentification {
        +confidence: Float
        +modelUsed: String
        +timestamp: DateTime
    }
    class HealthNote {
        +note: String
        +severity: normal | concern | urgent
        +timestamp: DateTime
    }
    Volunteer "1" --> "*" FeedingLog : logs
    Volunteer "1" --> "*" Sighting : reports
    Volunteer "1" --> "*" HealthNote : writes
    FeedingLog "*" --> "1" Cat : for
    Sighting "1" --> "0..1" AIIdentification : triggers
    AIIdentification "*" --> "1" Cat : identifies
    HealthNote "*" --> "1" Cat : about
```

Notice how AIIdentification links a Sighting photo to a Cat — that is the stretch feature in one relationship. If you can draw this diagram for your own product, your architecture conversation at Checkpoint will be much stronger.
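If your team is more comfortable in code than in UML, the same domain model can be sketched as TypeScript types. Field names follow the diagram; the `id` and foreign-key fields are our own addition, one plausible way to encode the relationships:

```typescript
type CatStatus = "active" | "missing";
type VolunteerRole = "feeder" | "coordinator";
type Severity = "normal" | "concern" | "urgent";

interface Cat {
  id: number;
  name: string;
  photoUrl: string;
  colour: string;
  homeArea: string;
  status: CatStatus;
}

interface Volunteer {
  id: number;
  name: string;
  role: VolunteerRole;
  contact: string;
}

// Volunteer "1" --> "*" FeedingLog and FeedingLog "*" --> "1" Cat:
// both arrows become foreign-key fields on the many side.
interface FeedingLog {
  catId: number;
  volunteerId: number;
  timestamp: string;
  foodType: string;
  grams: number;
}

interface Sighting {
  id: number;
  volunteerId: number;
  timestamp: string;
  location: string;
  photoUrl: string;
}

// The stretch feature in one type: an AI identification ties a
// sighting photo back to a cat, with a confidence score.
interface AIIdentification {
  sightingId: number;
  catId: number;
  confidence: number;
  modelUsed: string;
  timestamp: string;
}

interface HealthNote {
  catId: number;
  volunteerId: number;
  note: string;
  severity: Severity;
  timestamp: string;
}

const xiaobai: Cat = {
  id: 1,
  name: "Xiaobai",
  photoUrl: "",
  colour: "white",
  homeArea: "North Campus",
  status: "active",
};
console.log(xiaobai.status); // active
```

Types like these become the skeleton of your Technical Documentation's data schema — and the compiler checks them for free.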
Step 3 — Kanban tasks from acceptance criteria
Each acceptance criterion that requires building something becomes a task. Keep tasks small — one person, less than half a day.
MiaoLog — feeding log tasks (from the acceptance criteria above):
| Task | Role | Estimate |
|---|---|---|
| Design feeding form in Figma (cat selector + fields + confirmation) | UX Designer | 1.5 hr |
| Create feeding_logs table in Supabase with schema | Technical Lead | 30 min |
| Build feeding form component, connect to Supabase | Frontend Dev | 2 hr |
| Add duplicate-feeding warning (check last 10 min) | Frontend Dev | 1 hr |
| Test with 2 actual volunteers | User Researcher | 45 min |
Add every task to your Kanban board with a named owner. Move one to "Doing" before you leave.
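The duplicate-warning task on the board above is mostly pure logic — you can write and test it before touching the database. A sketch (the names are ours; the 10-minute window comes straight from the acceptance criterion):

```typescript
// Acceptance criterion: if two feeders log within 10 minutes
// for the same cat, a warning appears.
interface FeedingRecord {
  catId: number;
  fedAt: Date;
}

const TEN_MINUTES_MS = 10 * 60 * 1000;

// True when a new feeding for `catId` at `now` should trigger the
// warning, i.e. an existing record for the same cat is within 10 min.
function shouldWarnDuplicate(
  log: FeedingRecord[],
  catId: number,
  now: Date,
): boolean {
  return log.some(
    (r) =>
      r.catId === catId &&
      Math.abs(now.getTime() - r.fedAt.getTime()) <= TEN_MINUTES_MS,
  );
}

const recent: FeedingRecord[] = [
  { catId: 1, fedAt: new Date("2025-03-03T19:00:00Z") },
];
console.log(shouldWarnDuplicate(recent, 1, new Date("2025-03-03T19:05:00Z"))); // true
console.log(shouldWarnDuplicate(recent, 1, new Date("2025-03-03T19:20:00Z"))); // false
console.log(shouldWarnDuplicate(recent, 2, new Date("2025-03-03T19:05:00Z"))); // false
```

Because the rule is a pure function, the Frontend Dev can unit-test it in minutes and wire it to the real feeding log afterwards.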
Step 4 — First prototype
Once a task is in Doing, write some code. The goal is not finished code — it is something a user can react to. A form that submits, a screen that shows data, a button that does something. Use this prompt pattern for AI assistance:
I am building [one sentence about your product].
Stack: [frontend framework + Supabase, or your team's actual stack].

I need to implement this user story:
"[paste your user story]"

Acceptance criteria:
[paste your acceptance criteria bullets]

Generate the [component / function / table definition] for this. Include input validation and error handling. Add inline comments explaining each step.

Your pathfinder will ask you to explain what your prototype does. If you cannot explain what a function does, you cannot defend using it. AI-generated code you do not understand is a liability.
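For a feeding form, the AI's answer to that prompt might include validation like the following. The rules and messages here are our illustration, not the brief's — the point is that you can explain every branch before you commit it:

```typescript
interface FeedingInput {
  catId: number | null; // null = no cat selected yet
  grams: number;
  fedAt: Date;
}

// Validate a feeding-form submission; returns a list of problems
// (empty list = valid). Thresholds are illustrative.
function validateFeeding(input: FeedingInput, now: Date): string[] {
  const errors: string[] = [];
  if (input.catId === null) {
    errors.push("Select a cat before logging.");
  }
  if (!Number.isFinite(input.grams) || input.grams <= 0) {
    errors.push("Amount must be a positive number of grams.");
  } else if (input.grams > 500) {
    errors.push("Amount over 500g looks like a typo.");
  }
  if (input.fedAt.getTime() > now.getTime()) {
    errors.push("Feeding time cannot be in the future.");
  }
  return errors;
}

const now = new Date("2025-03-03T19:00:00Z");
console.log(validateFeeding({ catId: 1, grams: 100, fedAt: now }, now).length); // 0
console.log(validateFeeding({ catId: null, grams: -5, fedAt: now }, now).length); // 2
```

If you cannot say why the `grams > 500` branch exists, or what happens when `catId` is null, that is your cue to ask the AI to explain before the code goes anywhere near your repo.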
See AI Coding Tools for which tool to use at each stage. New to Git? The 100-second intro below covers everything you need for your first commit:
See also Git Basics for branch, commit, push, pull with commands you can copy.
Validation — plan this week, run in Week 6
Your user stories are hypotheses. You do not need working code to test them — a paper sketch or Figma mockup is enough for the first round.
This week: your User Researcher identifies and contacts 2–3 real people. Agree a time in Week 6. Add names and dates to your Dev Log now.
Ask: "I'm going to show you a rough sketch of something we're building. Can you tell me if it makes sense?" Then watch — don't explain.
What you're testing: does the "so that [outcome]" clause describe something that person actually cares about?
See the Validation Guide for the session guide and note template.
Scope check
Over 60% of teams entering Week 5 have scope or feasibility concerns flagged by their pathfinder. If yours did — this is the moment to cut.
The rule: if you cannot have a real user complete a task on this feature by Week 8, it is not a Must-have. Move it to Should-have and focus.
This week's missions
Mission 1 — Standup + Kanban (whole team, 40 min)
- Everyone stands up — three questions each, max 2 min per person
- Move completed tasks to Done on the Kanban board
- Every Must-have has at least one task in Doing with a named owner
- Unassigned tasks — give them a name now
Mission 2 — User stories (whole team, 20 min)
- Open your Week 4 PRD
- For each Must-have: write or refine the user story — specific role, specific action, testable outcome
- Ask: could you test this with a real person in 20 minutes? If not, rewrite it
- Add to Dev Log Week 5 entry
Mission 3 — Architecture + acceptance criteria (whole team, 25 min)
- Under each user story, add 2–4 acceptance criteria (PM leads, everyone contributes)
- Confirm your architecture — one section in the PRD: which frontend framework, which database, where deployed
- Break each Must-have into Kanban tasks (3–5 each, names assigned)
- Note the two biggest technical risks in the Dev Log
Mission 4 — First prototype commit (developers, self-paced)
- Pick your first Kanban task. Move it to "Doing"
- Write your AI prompt — paste in your user story and acceptance criteria
- Review the output: can you explain every line?
- Get something running β even partially. Commit it to GitHub with a clear message
- Paste the commit link in your Dev Log entry
Validation (User Researcher, today)
Identify 2–3 real people and contact them now. Book sessions for Week 6. Confirm: name, date, time. Add to Dev Log.
Before you leave
User stories + acceptance criteria → PRD (assessed at Checkpoint 1 and in Technical Documentation). Named Kanban tasks → individual contribution evidence (Checkpoint 1). First commit → Technical Documentation + Dev Log. Validation bookings → Validation Report. Everything today maps directly to something assessed.
Four things to have done today:
- User stories + acceptance criteria in your PRD, linked from Dev Log
- Architecture confirmed (one paragraph or diagram in the PRD)
- At least one Kanban task in "Doing" with your name on it
- Two user sessions booked — dates confirmed, names recorded