DPEV Reference Guide

Discuss → Plan → Execute → Verify The SDLC model for the Agentic Workforce Era.

Unfamiliar term? See the AWE Glossary.

Version: 1.0 | April 2026 | Internal — Confidential


1. Overview

DPEV is how Worksuite builds things. Every Build Cycle, every Technical Solution Build, every Platform Solution Build follows the same four phases: Discuss what needs to happen, Plan how to do it, Execute the work, Verify it's right.

That's it. Four steps, repeated in tight loops.

Why DPEV exists

Traditional SDLC was built for a world where information moved slowly and decisions were expensive. Requirements traveled through product managers, got translated into specs, queued in backlogs, groomed into sprints, built by engineers who weren't in the original conversation, tested by QA who'd never met the customer, and released through a ceremony that existed mostly to coordinate anxiety.

Every handoff degraded information. Every layer added latency. By the time something shipped, the original need had often changed, and the people who built it were working from a game of telephone.

DPEV eliminates all of that:

  • Information stays with the people doing the work. The BSA who heard the customer need works directly with the TSA who builds the solution. No telephone.
  • Speed is the competitive advantage. A 3-day Build Cycle means the worst case is 3 days of wasted effort, not 3 months. That changes every decision.
  • Every cycle ships. Not "we're 60% done." Ships. Deployed. In production. Getting feedback.
  • Agentic tooling compresses the work that remains. What used to take a team of five a full sprint now takes one TSA with agent support a single Build Cycle.

When to use DPEV

All builds. No exceptions.

  • A TSA building a new API endpoint for a customer? DPEV.
  • A BSA configuring a customer's payment workflow? DPEV.
  • Three TSAs collaborating on a new platform module? DPEV for each BC within each TSB.
  • A Worksuite Services IC contributing to a Build Cycle? DPEV.

If it produces a shippable artifact, it follows DPEV.


2. The Four Phases

2.1 Discuss

Purpose: Align on what needs to be built and why. No code. No architecture. Just shared understanding.

Who's in the room:

  • The STO (Single Threaded Owner) for the work product — usually the TSA for a BC or TSB
  • The BSA who owns the customer relationship (for customer-driven work)
  • Anyone with relevant domain expertise (Worksuite Services ICs, other TSAs)
  • Agentic tools for context retrieval (pulling customer data, prior solutions, platform capabilities)

What happens:

  1. The BSA explains the customer need or the TSA explains the pattern they've observed
  2. Participants ask questions until the problem is clearly understood
  3. The group agrees on the intent — what outcome are we trying to produce?
  4. Surface any hard constraints: compliance requirements, existing integrations, deadline pressure

Output: A clear, shared understanding of the problem and desired outcome. Not a document — a conversation. If it's a new CSB, the BSA will have already written the ICE Spec (Intent, Context, Evaluation). Discuss is where that ICE Spec gets pressure-tested with the people who'll build it.

Time budget: 30 minutes to 2 hours. If Discuss takes longer than 2 hours, the scope is too big — split it.

The bar for "done":

  • Every person in the room can articulate the problem in one sentence
  • The TSA knows enough to start planning
  • There are no open questions that would change the approach

2.2 Plan

Purpose: Define the approach, scope, and acceptance criteria. This is where the TSA decides how to build it.

Who's responsible: The TSA (STO for the BC or TSB). The BSA is available for questions but doesn't drive the plan.

What a good plan looks like:

A plan is not a spec. It's a short list of decisions:

  • Approach: How will this be built? Which parts of the platform are involved?
  • Scope: What's in this BC and what's explicitly out? (The 3-day cap forces this.)
  • Acceptance criteria: What does "done" look like? Be specific. "The customer can generate a quarterly revenue report filtered by region" — not "build reporting."
  • Known risks: What might go wrong? What don't we know yet?

How detailed? Detailed enough that the TSA could hand it to another TSA and they'd know what to build. In practice, that's usually a half page to a full page. If the plan is longer than a page, the BC scope is too big.

Where it lives: Wherever the team tracks BCs — Jira, a shared doc, the TSB thread. The format matters less than the content.

Time budget: 1-4 hours. Plan includes any spike work or investigation needed to make decisions. If planning takes more than half a day, the BC is too big or the Discuss phase missed something.

The bar for "done":

  • Acceptance criteria are written and the BSA agrees they match the customer need
  • The TSA is confident the work fits in 3 days
  • No blocking unknowns remain
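The shape of a plan and its readiness bar can be sketched as a small data record with a check. This is an illustrative sketch only: the class, field names, and checks are assumptions for the example, not a prescribed Worksuite schema.

```python
from dataclasses import dataclass, field


@dataclass
class BCPlan:
    """Illustrative Build Cycle plan record (field names are assumptions)."""
    approach: str
    in_scope: list[str]
    out_of_scope: list[str]
    acceptance_criteria: list[str]
    known_risks: list[str] = field(default_factory=list)
    estimate_days: float = 3.0

    def ready(self) -> list[str]:
        """Flag common reasons a plan is not ready to Execute."""
        problems = []
        if not self.acceptance_criteria:
            problems.append("no acceptance criteria")
        if self.estimate_days > 3:
            problems.append("won't fit the 3-day cap -- split the BC")
        if not self.out_of_scope:
            problems.append("nothing explicitly out of scope")
        return problems
```

A plan whose `ready()` list is empty meets the bar above; anything else goes back to Plan (or back to Discuss) before Execute starts.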

2.3 Execute

Purpose: Build the thing.

Who's responsible: The TSA, working with agentic tooling and (when needed) other ICs.

What happens:

The TSA writes code, configures the platform, builds integrations — whatever the plan calls for. This is the phase where agentic tooling changes the game most dramatically.

How agentic tooling changes execution

In the old model, a developer wrote every line of code, manually searched for relevant examples, context-switched between documentation and IDE, and spent significant time on boilerplate, tests, and repetitive patterns.

In DPEV with agentic tooling:

| Activity | Before (manual) | After (agentic) |
| --- | --- | --- |
| Writing boilerplate code | Developer types it | Agent generates from patterns |
| Searching platform codebase | grep, IDE search, asking colleagues | Agent retrieves relevant code, docs, prior implementations |
| Writing tests | Developer writes after feature | Agent generates alongside feature, TSA reviews and adjusts |
| Documentation | Written after the fact (often skipped) | Agent drafts during build, TSA reviews |
| Debugging | Manual log reading, breakpoints | Agent analyzes stack traces, suggests fixes |
| Code review prep | Developer re-reads own code | Agent flags potential issues before review |
| Configuration | Manual lookup of options and schemas | Agent pulls customer context, suggests config |

Pair programming with AI agents

The TSA isn't "using a tool." They're working with a collaborator that has instant access to the entire codebase, every customer configuration, and the platform documentation. The TSA brings judgment, domain knowledge, and the ability to say "no, that approach won't work because..." The agent brings speed, recall, and pattern recognition across the full platform.

This is why a single TSA with agentic tooling can do in 3 days what used to take a team of five a two-week sprint. The bottleneck shifts from "how many people can type code" to "how fast can one person make good decisions."

Time budget: 1-2.5 days of the 3-day BC. Execute is where most of the time goes, but it shouldn't consume the entire cycle.

The bar for "done":

  • All acceptance criteria from Plan are met
  • Code is committed and deployable
  • The TSA believes it's ready for Verify

2.4 Verify

Purpose: Confirm the build works, meets acceptance criteria, and is ready to ship. This is not a separate QA phase. It's the final phase of the same cycle, done by the same people.

Who's involved:

  • The TSA (STO) — validates technical correctness
  • The BSA — validates it solves the customer problem
  • Agentic tooling — runs automated tests, checks for regressions

What happens:

  1. Automated verification: Test suites run. The agent flags any failures, regressions, or coverage gaps.
  2. TSA review: The TSA walks through the acceptance criteria one by one. Does it work? Does it handle edge cases?
  3. BSA validation: The BSA confirms the solution matches the customer need. For customer-driven work, this is the most important check — does this actually solve what the customer asked for?
  4. Ship decision: If it passes, ship it. Deploy to production. No staging environment limbo. No release calendar. Ship.

What Verify is NOT:

  • Not a separate QA team testing someone else's code
  • Not a UAT phase where stakeholders kick tires for a week
  • Not a release ceremony with sign-offs from five people
  • Not optional ("we'll test it next sprint")

Time budget: 2-4 hours. If Verify takes more than half a day, either the build has real problems (stop, fix them) or the verification process is bloated (simplify it).

The bar for "done":

  • All acceptance criteria pass
  • Automated tests pass
  • BSA confirms it's right
  • It's deployed to production


3. The 3-Day Build Cycle (BC)

The Build Cycle is the atomic unit of work in DPEV. Every BC targets a maximum of 3 calendar days from Discuss to ship.

Why 3 days?

  • Forces scope control. If it doesn't fit in 3 days, break it down further. This eliminates scope creep by design.
  • Limits downside risk. The worst case is 3 days of wasted effort. You can absorb that. You can't absorb 3 months.
  • Creates rhythm. Teams (BSA + TSA pairs) develop a cadence. Start Monday, ship Wednesday. Start Thursday, ship Monday. Predictable, fast, repeatable.
  • Keeps information fresh. The Discuss conversation is still in everyone's head when Verify happens. No "what were the requirements again?" moments.

Typical BC cadence

Day 1 (Morning)     Discuss — align on the problem (30 min - 2 hrs)
Day 1 (Afternoon)   Plan — define approach and acceptance criteria (1-4 hrs)
Day 1-3             Execute — build with agentic tooling (1-2.5 days)
Day 3               Verify — test, validate, ship (2-4 hrs)

The exact breakdown varies. A well-understood change might spend 30 minutes in Discuss and Plan, then 2.5 days executing and verifying. A complex integration might spend half a day in Discuss and Plan, then 2 days executing, then half a day verifying.

The point is not rigid time allocation. The point is that the entire cycle completes in 3 days or less.

What if it won't fit in 3 days?

Break it into multiple BCs. Every time. No exceptions.

A BC that "just needs one more day" is a symptom of poor scope control in Plan. The TSA should have caught this and split the work.

If a BC is genuinely blocked (waiting on a third-party API, an infrastructure change, customer input), pause it and start the next BC. Don't let one blocked BC paralyze the whole TSB.


4. How DPEV Scales

DPEV operates at the BC level, but the same rhythm propagates up through the work hierarchy.

4.1 Build Cycle (BC) — 3 days max

Scope: A single focused deliverable. One TSA, one BSA as customer proxy.

DPEV → Ship → Complete

Examples:

  • Add a new column to the payment reconciliation report
  • Build an API endpoint for a customer integration
  • Configure a new payment method for a client
  • Fix a performance bottleneck in the compliance check

4.2 Technical Solution Build (TSB) — 3 weeks max

Scope: A sequence of BCs building toward a complete technical solution. One TSA owns it as STO.

BC₁ (3d) → BC₂ (3d) → BC₃ (3d) → ... → TSB Complete

A TSB is the full technical solution — the thing the BSA's CSB depends on. The 3-week cap (roughly 5-7 BCs) prevents open-ended technical work. If a TSB can't be done in 3 weeks, the scope is wrong.

Each BC within a TSB ships independently. BC₁ might deploy a database migration. BC₂ adds the API layer. BC₃ builds the UI. Each one goes to production. Each one follows DPEV.
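The BC and TSB caps can be expressed as a small model. This is a sketch under assumed names (class names, fields, and the working-day arithmetic are illustrative), not a real Worksuite schema.

```python
from dataclasses import dataclass

# Caps from this guide: 3 days per BC, 3 weeks (~21 days) per TSB.
BC_CAP_DAYS = 3
TSB_CAP_DAYS = 21


@dataclass
class BuildCycle:
    name: str
    days: int

    def valid(self) -> bool:
        # Every BC must fit the 3-day cap, no exceptions.
        return self.days <= BC_CAP_DAYS


@dataclass
class TechnicalSolutionBuild:
    name: str
    cycles: list  # list[BuildCycle], each shipping independently

    def valid(self) -> bool:
        # Each BC fits its cap, and the whole sequence fits the TSB cap.
        return (all(bc.valid() for bc in self.cycles)
                and sum(bc.days for bc in self.cycles) <= TSB_CAP_DAYS)
```

Under these assumptions, a TSB of up to seven 3-day BCs fits the cap; an eighth BC, or any single BC over 3 days, fails validation and signals a scope split.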

4.3 Customer Solution Build (CSB) — BSA coordinates

Scope: A complete solution for a customer's specific need. The BSA owns this as STO.

A CSB may include:

  • One or more TSBs (platform or service extensions)
  • Configuration of existing platform capabilities
  • Customer training or documentation
  • Worksuite Services handoff (PayOps setup, compliance config)

CSB
 ├── TSB₁ (new API endpoints)       → BC₁ → BC₂ → BC₃
 ├── TSB₂ (reporting module)        → BC₁ → BC₂
 ├── Platform configuration         → BC₁
 └── Customer onboarding            → BC₁

The BSA doesn't manage the TSBs. They coordinate timing and validate that each TSB's output actually serves the customer need. The TSA owns the technical decisions within each TSB.

4.4 Platform Solution Build (PSB) — TSA-initiated

Scope: A core platform extension or cross-module feature set. Multiple TSAs may collaborate.

A PSB starts when a TSA recognizes a pattern: "I've seen this same need across 3 different CSBs. We should build it into the platform."

PSB (new platform module)
 ├── TSA₁ owns TSB-A (data layer)     → BC₁ → BC₂ → BC₃
 ├── TSA₂ owns TSB-B (API layer)      → BC₁ → BC₂
 └── TSA₃ owns TSB-C (integration)    → BC₁ → BC₂ → BC₃

PSBs are the mechanism for platform investment. They're TSA-driven, not product-driven. The TSAs who work closest to the platform see the patterns and make the investment decisions.

If there's a dispute about whether a PSB is worth the investment, it goes through D3 (Discuss, Debate, Decide). If D3 doesn't resolve it, the CSA makes the call.


5. What DPEV Killed

| Old ceremony | What replaced it | Why it died |
| --- | --- | --- |
| Sprint planning | Just start a BC. The TSA and BSA align in Discuss, and the TSA starts building. No two-week planning horizon. No story points. No velocity tracking. | Sprint planning existed to coordinate large teams with shared resources. A TSA with agentic tooling doesn't need coordination — they need clarity and time. |
| Backlog grooming | Eliminated entirely. There is no backlog. | Backlogs are where work goes to die. If something is important enough to build, it goes into a BC now. If it's not important enough for now, it's not important enough to track. BSAs maintain their CAP (Customer Account Plan) for strategic context, but there's no queue of prioritized tickets waiting for capacity. |
| PRDs (Product Requirements Documents) | ICE Spec, written by the BSA. Intent, Context, Evaluation — one page max. | PRDs were an artifact of information handoff between product and engineering. The BSA writes the ICE Spec for the person who'll build it (the TSA), and they discuss it face-to-face. No translation layer. |
| Separate QA phase | Verify is the final phase of every BC. Same people, same cycle. | QA was a separate phase because developers and testers were different people on different timelines. In DPEV, the TSA verifies their own work (with agentic test generation), and the BSA validates it meets the customer need. Done in hours, not weeks. |
| Release ceremonies | Ship at the end of every BC. No staging limbo. No release train. | Release ceremonies existed to manage risk across large, monolithic deployments. 3-day BCs produce small, focused changes. The risk surface is small enough that ship-on-complete is safer than batch-and-release. |
| Status update meetings | Eliminated. Work is visible in the BC and TSB artifacts. | If you need a meeting to know what's happening, your tools are wrong. Agentic dashboards surface status in real time. The STO updates the BC status. Anyone can see it. |
| Sprint retrospectives | Replaced by continuous improvement within each BC. If something went wrong, you see it in Verify and fix it in the next BC — three days later, not two weeks later. | Retros were a batch feedback mechanism. When your cycle is 3 days, the feedback loop is continuous. |

6. Anti-Patterns (What DPEV Is NOT)

"We're doing DPEV" but it takes two weeks

If a BC takes more than 3 days, you're not doing DPEV. You're doing sprints with different names. Split the work. The 3-day cap is not a guideline — it's the whole point.

Discuss that's actually a requirements gathering session

Discuss is alignment, not documentation. If the BSA is interviewing the TSA about what they need, the ICE Spec isn't ready. The BSA should come to Discuss with the what and why already clear. Discuss is for shared understanding, not discovery.

Plan that turns into a design document

A plan is a half-page to a full page. Approach, scope, acceptance criteria, known risks. If the plan has architecture diagrams with swim lanes, it's a design document and you've lost the plot. Keep it tight enough that the TSA can hold it in their head.

Execute without a plan

"I'll figure it out as I go" is not DPEV. Execute without a plan produces aimless coding that doesn't ship in 3 days. The plan doesn't need to be perfect, but it needs to exist.

Verify as a rubber stamp

If Verify always passes on the first try and takes 15 minutes, you're not actually verifying. Real Verify catches things: edge cases the TSA missed, acceptance criteria the BSA realizes need adjustment, test failures that need fixing. If it's a rubber stamp, it's theater.

Skipping Verify because "we're out of time"

The BC is 3 days including Verify. If Execute consumed all 3 days, the scope was too big. Never skip Verify to make a deadline. Ship the verified subset and start a new BC for the rest.

Using DPEV for exploration or research

DPEV is for building shippable artifacts. If you need to explore a technology, investigate an approach, or prototype something with uncertain outcomes, that's a spike. Spikes are useful, but call them what they are and timebox them separately. Don't pretend a spike is a BC.

Backlog in disguise

If there's a Jira board with 50 tickets labeled "future BCs," that's a backlog. Kill it. The BSA's CAP captures customer strategy. The TSA's experience captures platform patterns. Work enters a BC when someone decides to build it — not when someone decides to write it down.


7. Worked Examples

Example 1: A customer needs a new report type

Situation: Acme Corp's BSA gets a request: "We need a quarterly revenue report broken down by region. Our current reporting only shows totals."

Day 0: BSA writes ICE Spec

| ICE | Content |
| --- | --- |
| Intent | Acme Corp needs regional revenue breakdowns for quarterly planning |
| Context | Current reporting aggregates all revenue. Data is already stored by region in the payment records. Acme's finance team reviews these reports in the first week of each quarter. Next quarter starts in 3 weeks. |
| Evaluation | Acme's finance lead can generate a Q1 report filtered by any of their 4 regions, matching their existing revenue totals ±$0.01 |

Day 1 morning: Discuss (45 minutes)

BSA and TSA review the ICE Spec. TSA asks:

  • "Is this just the UI, or do they need an API endpoint too?" → BSA: "Just UI for now."
  • "Are the regions configurable or hardcoded?" → BSA: "Hardcoded to their 4 regions is fine for now."
  • "Do they need export to CSV?" → BSA: "Yes, they paste into their own spreadsheets."

Outcome: Clear problem, clear scope, no open questions.

Day 1 afternoon: Plan (2 hours)

TSA writes the plan:

  • Approach: Add region filter to existing quarterly report component. Data layer already supports region — just need to pass the filter parameter and add the UI selector.
  • Scope: Region filter + CSV export on quarterly revenue report. Not configurable regions (hardcoded to Acme's 4). Not applicable to other report types this BC.
  • Acceptance criteria:
    1. Dropdown with 4 regions + "All Regions" option on quarterly report page
    2. Report data filters correctly by selected region
    3. Totals match unfiltered report when "All Regions" is selected
    4. CSV export includes region column and respects current filter
  • Risks: None significant. Data layer is proven.

Day 1-3: Execute (1.5 days)

TSA works with agentic tooling:

  • Agent pulls the existing report component code and data layer
  • TSA and agent build the filter parameter into the data query
  • Agent generates the UI dropdown component from the existing design system
  • Agent writes unit tests for the filter logic and CSV export
  • TSA reviews generated code, adjusts edge case handling for empty regions
  • Code committed, PR opened
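As a hypothetical sketch of the Execute output (record shape, region names, and function names are invented for illustration, not Worksuite code), the filter-plus-export logic from this plan might look like:

```python
import csv
import io

# Hardcoded to Acme's 4 regions, per the plan's scope decision.
REGIONS = ["North", "South", "East", "West"]


def filter_by_region(records, region):
    """Return records for one region; 'All Regions' returns everything
    (so totals match the unfiltered report, per acceptance criterion 3)."""
    if region == "All Regions":
        return list(records)
    return [r for r in records if r["region"] == region]


def export_csv(records):
    """Export records with the region column included (criterion 4)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["region", "revenue"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

The point of the sketch: each acceptance criterion from Plan maps to a small, directly testable behavior, which is what makes the Verify walkthrough fast.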

Day 3: Verify (3 hours)

  • Automated tests pass
  • TSA walks through all 4 acceptance criteria manually
  • BSA joins, confirms the report matches what Acme asked for
  • BSA clicks through each region, checks totals, exports CSV
  • Ship to production

Elapsed time: 3 days. Customer has their report. BSA notifies Acme's finance lead.


Example 2: A TSA notices a pattern and initiates a PSB

Situation: TSA Alex has built regional report filters for 3 different customers across 3 separate CSBs over the past month. Each time, it was a custom implementation because the platform's reporting module doesn't natively support configurable filters.

Recognition: "I keep building the same thing. The platform should support configurable report filters natively."

Step 1: TSA initiates PSB

Alex writes up the PSB proposal:

  • What: Add a configurable filter framework to the reporting module. Any report can define filter dimensions (region, department, date range, custom fields). Filters compose with each other. Export respects active filters.
  • Why: Built this 3 times as custom work. At least 2 more customers have similar needs queued in their BSAs' CAPs. Platform-native solution means future BSAs can configure it without TSA involvement.
  • Estimated scope: 2 TSBs, roughly 4-5 BCs total.

Step 2: D3 with other TSAs

Alex presents to the TSA group. Discussion:

  • TSA Beth: "I was about to build this for another customer. I'm in."
  • TSA Chris: "The data layer needs a new indexing approach for this to perform at scale. I can own that TSB."
  • No objections. Consensus reached through D3.

Step 3: PSB structured as TSBs

PSB: Configurable Report Filters
├── TSB-1: Filter Framework Core (TSA Chris — data layer + API)
│   ├── BC₁: Schema changes + filter index (3 days)
│   ├── BC₂: Filter API endpoints (3 days)
│   └── BC₃: Composite filter logic + performance testing (3 days)
└── TSB-2: Filter UI + Report Integration (TSA Alex — frontend)
    ├── BC₁: Generic filter component (3 days)
    └── BC₂: Integrate with existing report types + migration for 3 customers (3 days)

Step 4: BCs follow DPEV

Each BC runs the full DPEV cycle. TSB-1 BC₁ ships a database migration. TSB-1 BC₂ ships working API endpoints. TSB-2 BC₁ ships a reusable UI component. Each deployment is independent. Each one goes to production.

Step 5: Ship and leverage

After ~3 weeks (5 BCs across 2 TSAs), the platform has configurable report filters. The 3 existing customer implementations get migrated to the native module. Future BSAs can offer filtered reporting by configuring the module — no TSA needed for standard cases.

The flywheel: Custom work → pattern recognition → platform investment → faster custom work next time.


8. How Agentic Tooling Accelerates Each Phase

Agentic tooling isn't bolted onto DPEV — it's baked into every phase. Here's how agents compress work across the entire cycle:

Discuss

| Activity | Agent role |
| --- | --- |
| Pulling customer context | Agent retrieves customer data, prior solutions, open issues, and relevant history before the Discuss meeting starts |
| Surfacing prior art | Agent finds existing platform capabilities or past BCs that addressed similar needs. "We built something like this for Customer X in BC-847." |
| Capturing decisions | Agent documents the Discuss outcome in real time — no one needs to take notes |

Plan

| Activity | Agent role |
| --- | --- |
| Drafting acceptance criteria | Agent proposes criteria based on the ICE Spec and Discuss outcome. TSA reviews and adjusts. |
| Estimating scope | Agent assesses codebase complexity for the planned approach. "The data layer supports this natively. The UI component needs a new prop but it's a 50-line change." |
| Identifying risks | Agent flags dependencies, recent changes to the same code, and known issues in related modules |
| Checking for prior implementations | Agent searches across all BCs and TSBs for similar work. Prevents rebuilding what already exists. |

Execute

| Activity | Agent role |
| --- | --- |
| Code generation | Agent writes boilerplate, standard patterns, and repetitive code. TSA focuses on novel logic and judgment calls. |
| Test generation | Agent writes unit tests, integration tests, and edge case coverage alongside the feature code |
| Code review prep | Agent reviews the TSA's code for style violations, potential bugs, and security concerns before human review |
| Documentation | Agent drafts technical docs, API docs, and configuration guides as the code is written |
| Context switching | Agent holds the full context of the BC — no need for the TSA to re-read files or search for functions. "The method you need is in payment_service.py line 234." |

Verify

| Activity | Agent role |
| --- | --- |
| Running test suites | Agent triggers and monitors automated tests, surfaces failures with root cause analysis |
| Regression detection | Agent compares behavior against baseline, flags unexpected changes in unrelated functionality |
| Acceptance criteria validation | Agent walks through each criterion programmatically where possible, reports results |
| Deployment | Agent handles the deployment pipeline, monitors for errors post-deploy |

The multiplier effect

Each phase gets compressed. But the real acceleration is compounding:

  • Better context in Discuss → fewer wrong turns in Plan
  • Better risk identification in Plan → fewer surprises in Execute
  • Faster coding in Execute → more time for thorough Verify
  • Faster Verify → ship sooner → start the next BC sooner

A BC that would have taken 5 days without agents fits in 3 days with them. Over a 3-week TSB, that's the difference between 3 BCs and 5 BCs. Over a quarter, it's the difference between shipping 15 features and shipping 25.

Speed is the competitive advantage. Agentic tooling is what makes the 3-day cycle realistic.


Quick Reference

DPEV at a glance

| Phase | Purpose | Time budget | Output |
| --- | --- | --- | --- |
| Discuss | Align on what and why | 30 min – 2 hrs | Shared understanding |
| Plan | Define how, scope, done criteria | 1 – 4 hrs | Plan (half page to 1 page) |
| Execute | Build with agentic tooling | 1 – 2.5 days | Shippable code |
| Verify | Test, validate, ship | 2 – 4 hrs | Deployed to production |

Time caps

| Level | Max duration | Owner |
| --- | --- | --- |
| BC | 3 days | TSA (STO) |
| TSB | 3 weeks | TSA (STO) |
| CSB | Varies | BSA (STO) |
| PSB | Varies | TSA(s) |

Decision rules

  • It doesn't fit in 3 days? Split it into multiple BCs.
  • The plan is more than a page? The scope is too big.
  • Discuss is over 2 hours? The problem isn't clear enough — the BSA needs a sharper ICE Spec.
  • Verify found problems? Fix in this BC if it's small. Start a new BC if it's not.
  • Blocked on an external dependency? Pause, start the next BC.
  • Same custom work built 3 times? Initiate a PSB.
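The first two rules above are mechanical enough to encode as a check. A minimal sketch, assuming a day estimate and a plan length in pages as inputs (the function name and thresholds come from this guide, nothing else):

```python
def scope_check(estimate_days: float, plan_pages: float) -> list[str]:
    """Apply the two mechanical scope rules from this guide."""
    actions = []
    if estimate_days > 3:
        actions.append("split into multiple BCs")
    if plan_pages > 1:
        actions.append("scope too big -- cut the plan")
    return actions
```

An empty result means the BC clears both gates; anything returned is an action to take before Execute starts.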

Document maintained by Red. Companion to org/awe-org-design.md.