If you search "agile sprint" you'll find a hundred diagrams with neat little boxes: backlog grooming, sprint planning, daily standups, retrospectives, velocity charts. It all looks very tidy. It also looks nothing like the way we actually work.
This isn't a criticism of Scrum or any other framework. Those patterns exist for a reason, and they work brilliantly for large teams coordinating across departments. But we're a small, focused studio. Our sprints are one week long, our teams are tight, and we've woven AI tooling into every phase. Here's what that actually looks like from Monday morning to Friday afternoon.
Monday: Kickoff and Scope Lock
Every sprint starts on Monday morning with a single meeting. We call it kickoff, and it rarely lasts longer than forty-five minutes. The goal is simple: agree on exactly what we're delivering by Friday.
Before that meeting happens, the project lead has already done the prep work. They've reviewed the backlog, spoken to the client if needed, and written a short brief for each task. By the time the team sits down together, we're not debating priorities; we're confirming them.
The brief for each task includes context, acceptance criteria, and any design references or API specs. We write these in plain English, not user stories. "As a user, I want to..." has its place, but when you're a team of three working on a focused product, clarity beats ceremony.
Monday is also when we set up our AI tooling for the week. That means loading the relevant codebase context into Claude, configuring Cursor workspaces, and making sure our prompt libraries are up to date with the project's conventions. This step takes maybe twenty minutes, but it pays for itself many times over during the build phase. A well-primed AI assistant that understands the project's architecture, naming conventions, and tech stack is dramatically more useful than one working from a cold start.
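To make that priming concrete, here's the kind of conventions note we might load into the assistant at the start of the week. Every detail below is hypothetical, purely to illustrate the shape of the thing, not our actual stack or rules:

```
Project conventions (loaded into the AI assistant on Monday)
- Stack: Next.js + TypeScript, Postgres (all illustrative)
- Components live in src/components/, one folder per component
- Naming: PascalCase components, camelCase hooks prefixed with "use"
- Tests are colocated next to the code they cover
- Flag any new dependency before adding it
```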
Tuesday to Thursday: The Build Phase
This is where the work happens. Three days of focused building, and the rhythm is straightforward.
Async Standups
We don't do daily standup meetings. Instead, each team member posts a short async update first thing in the morning: what they did yesterday, what they're doing today, and anything that's blocking them. These go into a shared channel. The project lead reads every one and steps in if something needs unblocking. The whole process takes about five minutes per person, and nobody has to stand in a circle watching someone talk about CSS fixes they don't care about.
Building with AI
Our developers work with Claude and Cursor as constant companions. This isn't about asking an AI to write an entire feature and pressing "accept." It's more nuanced than that, and it's worth explaining because clients often ask us about it.
A typical workflow looks like this: a developer picks up a task, reads the brief, and starts by thinking through the architecture. They might sketch out a component structure or a data flow on paper. Then they open Cursor and start building, using Claude to accelerate the parts that are mechanical: generating boilerplate, writing test scaffolding, converting a design spec into initial markup, or working through a tricky API integration.
The developer is always the decision-maker. They read every line the AI produces, refactor what needs refactoring, and reject what doesn't fit. But the speed gain is real. Tasks that used to take a full day often take half a day. A component that once required an hour of wiring up imports, props, and basic state management now takes fifteen minutes, leaving the developer free to spend their time on the logic that actually matters.
In practice, the AI shows up in a few recurring roles:
- Code generation: Boilerplate, repetitive patterns, and initial implementations get drafted by AI and refined by the developer.
- Code review assist: Before pushing a PR, developers use Claude to review their own code for edge cases, security issues, and performance concerns.
- Documentation: Inline comments, README updates, and API docs are drafted by AI and edited for accuracy.
- Debugging: When something breaks, feeding the error and context to Claude often surfaces the fix faster than a Stack Overflow search ever could.
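The code-review assist step can be scripted. Here's a minimal sketch of a pre-push helper, assuming the official `anthropic` Python SDK; the prompt wording and the `request_review` helper are illustrative, not our exact setup:

```python
# Hypothetical pre-push self-review helper: wraps the staged git diff in a
# review prompt and (optionally) sends it to Claude. Prompt wording and
# helper names are illustrative.
import subprocess


def build_review_prompt(diff: str) -> str:
    """Wrap a git diff in a review request covering the checks we care about."""
    return (
        "Review this diff for edge cases, security issues, and performance "
        "concerns. Be specific and point at the relevant lines.\n\n" + diff
    )


def staged_diff() -> str:
    """Collect the staged changes that are about to go into the PR."""
    result = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    )
    return result.stdout


def request_review() -> str:
    """Send the staged diff to Claude and return its review (needs an API key)."""
    import anthropic  # assumes the official SDK is installed

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": build_review_prompt(staged_diff())}],
    )
    return message.content[0].text
```

The point isn't the script itself; it's that the review prompt is consistent every time, so nothing depends on a developer remembering to ask about edge cases at 4pm on a Thursday.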
Pair Programming
We still pair program. AI hasn't replaced that. When two developers sit down together to work through a complex problem, whether it's a tricky state management pattern, a performance bottleneck, or an architectural decision, that conversation produces better outcomes than either person working alone. What's changed is that the pair now has a third collaborator. It's common to see two developers working through a problem with a Claude session open, using it to quickly prototype three different approaches before committing to one.
The biggest shift isn't speed; it's confidence. When you can prototype and test an approach in twenty minutes instead of two hours, you're far more willing to explore the right solution rather than settling for the first one that works.
Friday: Review, Demo, Deploy
Friday is structured around three things: internal review, client demo, and deployment.
Internal Review
Friday morning starts with the team reviewing everything that was built during the week. This isn't a formal QA gate (we test continuously during the build phase), but it's a chance to look at the sprint's output as a whole. Does everything hang together? Are there rough edges we missed? Does the user experience flow the way we intended?
We run through the work on a staging environment, usually screen-sharing so the whole team can see it. If something needs a quick fix, we handle it there and then. If something needs more work, it goes into next week's backlog with a clear description of what's left.
Client Demo
Friday afternoon is demo time. We get on a call with the client and walk them through everything we've built that week. This is the moment that makes the entire process work, and it's one of the things clients consistently tell us they value most.
Every single week, the client sees real, working software. Not mockups. Not presentations. Not a Jira board with green ticks. They see the actual product, running on a staging server, doing the thing it's supposed to do. They can click around, ask questions, point out things that don't feel right, and suggest changes. Those notes go straight into next Monday's kickoff.
Deploy or Stage
If the sprint's work is ready for production, we deploy on Friday afternoon after the demo. Our CI/CD pipelines handle the heavy lifting: automated tests run, builds compile, and everything goes through a final check before hitting production. If the client wants more time to review, or if we're building toward a larger release, the work stays on staging until it's ready.
Either way, nothing sits in a branch gathering dust. Code that's written during the week is merged, reviewed, and either deployed or staged by end of day Friday. There's no "we'll get to that next month" purgatory.
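The Friday decision logic is simple enough to sketch. This is a hypothetical outline, not our actual pipeline: the command names and the `scripts/deploy.sh` path are placeholders, and the runner is injectable so the flow can be exercised without touching a real project.

```python
# Hypothetical sketch of the Friday "deploy or stage" step: run tests and
# the build, then promote to production only when the client has signed off.
# Command names and script paths are illustrative.
import subprocess


def run(cmd: list[str]) -> None:
    """Run a pipeline step, failing loudly if it doesn't pass."""
    subprocess.run(cmd, check=True)


def friday_release(client_approved: bool, run=run) -> str:
    """Return where this week's work ended up: 'production' or 'staging'."""
    run(["npm", "test"])          # automated test suite
    run(["npm", "run", "build"])  # production build
    target = "production" if client_approved else "staging"
    run(["./scripts/deploy.sh", target])  # placeholder deploy script
    return target
```

Either branch ends with the week's work live somewhere a human can click on it, which is the property the whole process depends on.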
How AI Accelerates Each Phase
It's worth stepping back and looking at how AI tooling affects the sprint as a whole, not just the build phase.
- Monday kickoff: Briefs are clearer because we use Claude to stress-test acceptance criteria before the meeting. "What edge cases are we missing? What questions will the developer have?" This means fewer mid-week surprises.
- Daily async updates: Short and focused because developers spend less time stuck. When you can get unstuck in five minutes by describing your problem to an AI, you don't end up writing "still blocked on the API issue" three days in a row.
- Build phase: The raw throughput increase is significant. We estimate AI tooling gives us a 30-40% speed gain on implementation tasks, which translates to more features per sprint or more time for polish and testing.
- Friday review: More complete work to review because the team delivered more during the week. Fewer "we ran out of time" conversations.
- Client demo: More to show. Every week. Clients see consistent, visible progress, which builds trust and makes the whole relationship smoother.
Why Clients Love Transparency
The weekly demo isn't just a nice-to-have. It's the foundation of how we build trust with clients. In a traditional agency model, you might brief a project, wait six weeks, and hope what comes back matches what you had in mind. That gap between expectation and delivery is where most project relationships break down.
We've eliminated that gap entirely. When a client sees working software every Friday, several things happen naturally:
- Course corrections are small. If something's heading in the wrong direction, we catch it after one week of work, not six. A small adjustment on Monday is far cheaper than a major rework in month three.
- Trust builds quickly. After three or four Fridays of seeing real progress, clients stop worrying about whether we're on track. They can see that we are.
- Feedback is better. It's much easier to give useful feedback on a working product than on a wireframe or a spec document. Clients say things like "this feels slow" or "I expected this button to do something else": the kind of feedback that only comes from using real software.
- There are no surprises at launch. By the time we ship, the client has seen and approved every piece of the product, week by week. Launch day is a celebration, not a reveal.
This approach isn't revolutionary. It's just disciplined execution of a simple idea: build something real every week, show it to the people paying for it, and adjust based on what they tell you. The AI tooling makes it possible to deliver more within each sprint, but the principle would hold even without it.
If you've worked with agencies that disappear for weeks at a time and resurface with something that doesn't quite match what you asked for, you know why this matters. And if you haven't β well, we'd like to keep it that way.