Week 7: Sprint 3 + Architecture + AI Agents
Session anchor – write this down before you open your laptop
What matters most before Demo Day – and is that what your team is working on right now?
Take a pen and paper. Discuss as a team. Building more is not always the answer – strong validation evidence can matter more than a polished prototype. Return to this at the end of the session.
| # | Mission | Time |
|---|---|---|
| 0 | Architecture + Validation recap | First 30 min |
| 1 | Standup + sprint planning | 25 min |
| 2 | Validation – second round | Self-paced |
| 3 | AI agents – working smarter | Self-paced |
| 4 | Build sprint | Remaining time |
Mission 0 – Architecture + Validation recap
No direct questions to Stefan during this block. Post to the Live Q&A instead – Stefan works through the queue live in Hour 3.
Part A – System overview in your Technical Documentation
In Week 5 your team decided on a technical direction. Now you have been building for two weeks. This week: update the system overview section of your Technical Documentation to describe what you have actually built – not what you originally planned.
This is a conceptual description, not an engineering design. You are describing the main logical parts of your system and how they relate to each other. Engineering students can add technical depth from their major – but clear reasoning matters more than technical correctness, and technical depth is not required for a top score.
Three questions to answer in plain language:
| Question | What to write | MiaoLog example |
|---|---|---|
| What are the main parts? | Name the logical components – the user-facing app, where data is stored, any external service you use | "A mobile web app. A database that stores cats, feedings, and health notes. No server – the app talks directly to the database." |
| How does data move? | Describe one user action from start to finish | "Volunteer opens the app → taps 'Log feeding' → form submits → database saves the record → the feeding appears in the list immediately." |
| Why did you choose each tool? | One sentence per choice, in your own words – why, not just what | "We chose Supabase because it generates a full API from our database – the app can read and write data directly from the browser, no server code needed." |
What assessors look for: Reasoning, not technical sophistication. A team that writes "we chose X because Y" with a clear reason scores higher than one that lists tools without explanation – even if the tools listed are more technically impressive.
Template: Technical Documentation – open the architecture section and update it today.
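The "how does data move" row is easier to write once you have traced the flow in code. Here is a minimal sketch of the MiaoLog feeding flow, assuming JavaScript; the `feedings` array is an in-memory stand-in for the real database, and all names are hypothetical:

```javascript
// In-memory stand-in for the database table (hypothetical shape).
const feedings = [];

// "Log feeding": the form submits, the record is saved,
// and the feeding appears in the list immediately.
function logFeeding(catId, foodType, amount, fedAt = new Date().toISOString()) {
  const record = { catId, foodType, amount, fedAt };
  feedings.push(record); // database saves the record
  return record;         // the app shows a confirmation
}

// The list view reads straight from the same store, newest
// first – no server in between.
function feedingList() {
  return [...feedings].sort((a, b) => b.fedAt.localeCompare(a.fedAt));
}
```

With Supabase the `push` would instead be an insert into the `feedings` table, but the shape of the flow – action, save, immediate read – is the same thing your system overview should describe.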
Part B – Validation: where you should be now
The validation process has four stages. Here is the map:
```mermaid
flowchart LR
    A["Stage 1\nProblem\nValidation"] --> B["Stage 2\nConcept\nTesting"]
    B --> C["Stage 3\nUsability\nTesting"]
    C --> D["Stage 4\nClose\nthe Loop"]
    style A fill:#7c3aed,color:#fff,stroke:#a78bfa,stroke-width:2px
    style B fill:#b45309,color:#fff,stroke:#fbbf24,stroke-width:2px
    style C fill:#15803d,color:#fff,stroke:#4ade80,stroke-width:2px
    style D fill:#1d4ed8,color:#fff,stroke:#60a5fa,stroke-width:2px
```

| Stage | What you do | Evidence for Validation Report |
|---|---|---|
| 1 – Problem | Talk to real people who have your problem | Interview notes, Week 3 research |
| 2 – Concept | Show an early sketch or mockup – do not explain it | Notes on what they did and said |
| 3 – Usability | Give a real task. Stay quiet. Record what happens. | Session notes: who, what shown, what they did, what surprised you |
| 4 – Iterate | Change something. Test again. Document before and after. | Before/after screenshots + "participant X couldn't find Y, so we moved it to Z" |
Where most teams are in Week 7: Stages 3 and 4. You should have run at least 2 sessions by now (the first sessions were in Week 6). You need 5+ documented sessions total for the Validation Report.
Template: Validation Report – record today's session in the notes section while it is fresh.
Mission 1 – Standup + sprint planning
Standing ritual – every session from Week 5 through Week 9. Three questions each, two minutes per person:
- What did I do since the last session?
- What will I do in this session?
- Any blockers?
After the standup – update your Kanban:
- Move completed tasks to Done
- Every developer has exactly one task in Doing with their name on it
- Sprint goal check: what did you say last week? Did you ship it?
Sprint planning question for Week 7: By the end of this session, what will a real user be able to do that they could not do last week? One sentence. Write it on the board.
Team discussion opener: What matters most before Demo Day – and is that what your team is working on right now? Building more is not always the answer. Write your answer next to the sprint goal.
Mission 2 – Validation, second round
By Week 7 you should be in Stage 3: Usability Testing. Run real sessions with people who are not on your team.
Quick session recipe (20 minutes):
- Opening (2 min): "We are testing our product, not you. There are no wrong answers. Please think out loud as you go."
- Give a task (not a question): "Imagine you want to [user goal]. Show me what you would do." – then stop talking.
- Observe (15 min): Note what they try first, where they pause, what they say. Do not help them.
- Debrief (3 min): "Was anything confusing? What did you expect to happen when [the thing that failed]?"
- Write it up (10 min after): One paragraph. Who (role, not name), what you showed, what they did, one thing that surprised you.
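To show the level of detail expected, here is a hypothetical MiaoLog write-up – the participant and every observation are invented for illustration:

```
Participant: volunteer feeder, second-year student, not on our team.
Showed: the feeding log screen on a phone.
What they did: found "Log feeding" right away, but paused at the amount
field – unsure whether to enter grams or scoops.
Surprise: they tapped the cat's photo expecting to see its health notes.
```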
What changes this week: You have run at least one session now. You know what broke. Fix one thing based on that finding – then test the fix in a second session this week.
See the Validation Guide for the full session template and note-taking format.
Mission 3 – AI agents, working smarter
By now most teams have used AI tools to help build their product. Here is the pattern that gets better results – based on what has worked and not worked across many teams this semester.
Why vague prompts produce bad results
An AI agent builds what you describe. The less context you give, the more it guesses. When it guesses wrong, you get something generic that does not match your product.
Vague prompt:
"Build me a login page."
Result: A generic form with no connection to your product, your users, or how you defined "done". You spend an hour starting over.
Structured prompt (what actually works):
"I am building [one sentence about your product]. Stack: [frontend] + [database/backend]. Here is the user story I am implementing: [paste the user story]. Acceptance criteria: [paste the bullets – the things that must be true for this to count as finished]. Generate the [screen / feature / form] for this. Do not add features beyond these acceptance criteria."
The difference: the AI now knows what to build, how to build it, and when to stop.
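Filled in for MiaoLog, the course's worked example, the structured prompt reads like this (wording illustrative):

```
I am building MiaoLog, a mobile web app where campus cat volunteers log
feedings and health notes. Stack: Vue 3 + Supabase.
Here is the user story I am implementing: As a volunteer feeder, I want
to record who fed which cat, when, and how much, so that other
volunteers can check before feeding.
Acceptance criteria:
- volunteer sees the last 24h of feedings in under 5 taps
- a confirmation appears after logging
- a warning shows if the same cat was fed twice in 10 minutes
Generate the feeding form for this. Do not add features beyond these
acceptance criteria.
```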
Use your existing documents as input
You have already written the inputs the AI needs. Use them:
| Document | What to paste as context |
|---|---|
| User story | The full story – "As a [role], I want [action], so that [outcome]" |
| Acceptance criteria | The 2–4 bullets under each user story – they define the finish line |
| Architecture | Your stack – prevents the AI from suggesting incompatible tools |
| Design description | Describe the screen layout in plain language: "There is a card at the top with the cat's name and photo. Below it, a list of feeding entries. Each entry shows time, food type, and amount." |
Skip the copy-paste – use a spec folder
If you are using an AI coding agent (a tool that works inside your project, not just a chat window), you do not need to paste context into every prompt. Add these files to your project once – the agent reads them at the start of every session.
Worked example – all files, filled in
MiaoLog (喵Log) – the campus cat app from this course
A complete spec folder: product requirements, user roles, user stories with acceptance criteria, technology choices with rationale, and a full UI spec. Open it, read it, then model your own files on it.
View MiaoLog-WorkedExample on GitHub. What goes in each file – with MiaoLog examples:

```
your-project/
├── product-requirements.md   – what it is, problem, requirements, out of scope
├── user-roles.md             – who the users are and what each role can do
├── user-stories.md           – user stories + acceptance criteria
├── technology-choices.md     – your confirmed tools and why you chose each one
├── ui-spec.md                – colours, fonts, screen layouts (optional)
└── designs/                  – screenshots or photos of your design (optional)
    └── home-screen.png
```

product-requirements.md – problem, requirements table, constraints, out of scope:
MiaoLog is a mobile web app for campus cat volunteers. Volunteers log feedings and health notes. Group coordinators monitor all cats. Core requirement R1: volunteer can log a feeding in under 30 seconds.
user-stories.md – copy from your PRD, one story per feature:
As a volunteer feeder, I want to record who fed which cat, when, and how much, so that other volunteers can check before feeding. Done: volunteer sees last 24h of feedings in under 5 taps · confirmation appears after logging · warning shows if same cat fed twice in 10 minutes
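The last "done" condition is concrete enough to sketch as code. A minimal check, assuming JavaScript; the function and field names are hypothetical:

```javascript
// Returns true when the same cat already has a feeding logged within
// the last 10 minutes – the app would then show the warning.
function fedTwiceWithin10Min(feedings, catId, nowMs) {
  const tenMinutes = 10 * 60 * 1000;
  return feedings.some(
    (f) => f.catId === catId && nowMs - Date.parse(f.fedAt) < tenMinutes
  );
}
```

A feeding logged at 08:00 makes a second attempt at 08:05 return `true` (show the warning), while an attempt at 08:20 returns `false`.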
technology-choices.md – your confirmed tools and the reason for each:
Frontend: Vue 3 – team knows it. Database: Supabase – generates a full API from our database, the app reads and writes directly from the browser with no server code. No native app – mobile browser is enough.
Once the folder exists, start each AI session with:
"Read the files in this folder. Then help me build: [paste a user story from user-stories.md]."
The agent reads your product description, your stack, your acceptance criteria, and your design images – all at once. No copy-paste. Each new task starts with the same one-line prompt.
The AI agent loop – one task at a time
The teams that get the most from AI agents are not the ones who write the biggest prompts. They are the ones who break work into small tasks and test each result before moving on.
1. Pick ONE task from Kanban
2. Write the structured prompt (user story + acceptance criteria + stack)
3. Generate the output
4. Try it – does it do what the user story says?
5. If yes → save your progress. If no → paste the result back in and ask the AI to fix it.

Common patterns that have caused problems
| What teams did | What went wrong | Better approach |
|---|---|---|
| Asked AI to "build the whole app" in one prompt | Output was generic and did not match any user story | One user story at a time |
| Used AI output without testing it | Feature looked right but did not match what the user story required | Try it out before moving on |
| Gave no stack context | AI suggested tools not accessible from China | Always include your confirmed stack |
| Described the UI without acceptance criteria | AI added features nobody asked for | Paste your acceptance criteria as the stop signal |
| Used design tool name as a verb ("Figma this") | AI generated design files, not a working feature | Describe the layout in plain language or upload the image |
Mission 4 – Build sprint
Sprint 3 is the last sprint with comfortable time before Demo Day. Use it.
If your team is blocked:
| Blocker | What to do |
|---|---|
| Cannot agree on what to build | Look at your user stories – what is the smallest done condition not yet working? |
| AI output does not work | Paste the error back in, include your stack and exact error message |
| Feature is too large for one session | Split it – front end only, or data model only, or just the UI with no backend yet |
| Someone is not contributing | Name it in the standup, note it in the Dev Log, message your pathfinder |
Commit standard:
```
feat: user can submit a new feeding entry
fix: feeding form no longer clears on validation error
docs: update architecture section with confirmed stack
```

Every commit that goes into your Dev Log should have a message a pathfinder can read in three seconds and understand what changed.
Before you leave
Five things to have done today:

- Standup run and Kanban updated
- System overview section of your Technical Documentation updated
- Today's validation session recorded in the Validation Report
- Spec folder added to your project
- Sprint goal for Week 7 written on the board