How to Run a Forecast Call That Actually Improves Accuracy
The average B2B sales forecast is off by 25-40%. Most of that error traces back not to prediction skill, but to how forecast calls are run — or more precisely, how they aren't. A forecast call should be a structured conversation that builds a shared, evidence-based view of what's likely to close. Instead, it often devolves into reps giving optimistic narratives, managers accepting gut feel, and nobody updating the CRM beforehand.
A well-structured forecast call isn't about interrogating reps. It's about building accountability, surfacing risk early, and turning weekly data into coordinated action.
Why Forecast Calls Go Wrong
Before diving into the solution, let's be clear about what breaks most forecast calls:
Reps Give Optimistic Narratives Instead of Evidence
A rep walks into the call and says, "I feel good about the Acme deal — I think it closes this month." That's a narrative. It could be backed by months of engagement, or it could be hope. Managers often have no way to distinguish between the two, so they nod and write it in. Six weeks later, the deal hasn't moved and nobody's sure why.
No Consistent Criteria for Forecast Categories
One rep thinks "commit" means "I've spoken to the buyer." Another thinks it means "the contract is signed." Without a shared definition, your forecast becomes a collection of individual interpretations rather than a unified view. That destroys any ability to compare one rep's forecast accuracy to another's or to improve the process over time.
The Call Becomes an Interrogation
Managers who sense optimistic bias often respond by drilling down with aggressive questions. "Are you sure about that close date? How do you know they'll sign?" This approach might surface a few problems, but it also makes reps defensive. They learn to oversell confidence rather than share candid assessments of deal health. You get worse data, not better.
Data Is Stale
Nobody updates the CRM before the call. The manager is looking at last week's pipeline snapshot while the rep is talking about conversations that happened two days ago but haven't been logged. This mismatch means decisions are made on incomplete information. Stage changes, risk flags, and activity get recorded after the call (if at all), which defeats the purpose of the meeting.
Close Dates Slip Without Explanation
A deal was in commit last month with a target close of Q1. Now it's in best case with a Q2 target. The manager asks why, and the rep says, "We got delayed in procurement." But that should have been visible weeks ago. The forecast call missed it because nobody reviewed deal progression — they just reviewed snapshot numbers.
The Forecast Call Framework
A well-designed forecast call operates in three phases: preparation, the meeting itself, and follow-through. Each phase has distinct goals and a measurable output.
Before the Call: Preparation (15 Minutes)
The call's quality is set before anyone speaks. Here's what you need prepared:
- Pull current pipeline by forecast category. Separate deals into commit, best case, and pipeline. Note the stage of each deal and the close date.
- Calculate coverage ratio against target. Aim for 3-3.5x coverage in best case. If you're at 2x, you're short on pipeline and need to talk about upside or pipeline generation.
- Flag deals where close dates have slipped more than once. A deal that moved from Q1 to Q2 to Q3 suggests either a qualification issue or a stalled deal that needs to be discussed.
- Identify deals with no activity in the last 7 days. No calls, no emails, no meetings. If it's in commit and hasn't moved in a week, that's a risk flag.
- Note any deals that moved backward in stage. A deal that dropped from evaluation to discovery is a signal of buyer disengagement. That needs discussing.
Send this summary to the rep 24 hours before the call. Give them time to think about their deals with fresh eyes. Don't make the call feel like a surprise audit.
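If your pipeline exports to a spreadsheet or a script, these prep checks are easy to automate. Below is a minimal Python sketch of that preparation step, assuming deal records with illustrative field names (amount, close_date_history, last_activity, stage_history); this is not any specific CRM's schema, just the shape of the checks described above.

```python
from datetime import date, timedelta

# Hypothetical deal records exported from the CRM; field names are assumptions.
deals = [
    {"name": "Acme", "category": "commit", "amount": 80_000,
     "close_date_history": [date(2025, 3, 14)], "last_activity": date(2025, 3, 3),
     "stage_history": ["discovery", "evaluation", "negotiation"]},
    {"name": "Globex", "category": "best_case", "amount": 120_000,
     "close_date_history": [date(2025, 3, 31), date(2025, 4, 30), date(2025, 6, 15)],
     "last_activity": date(2025, 2, 18),
     "stage_history": ["discovery", "evaluation", "discovery"]},
]

STAGE_ORDER = ["discovery", "evaluation", "negotiation", "contract"]
target = 150_000
today = date(2025, 3, 5)

# Coverage ratio: open commit plus best case value against the period target.
open_value = sum(d["amount"] for d in deals if d["category"] in ("commit", "best_case"))
print(f"Coverage: {open_value / target:.1f}x (aim for 3.0-3.5x)")

for d in deals:
    flags = []
    # Close date slipped more than once: more than two dates in the history.
    if len(d["close_date_history"]) > 2:
        flags.append("close date slipped more than once")
    # No logged activity in the last 7 days.
    if today - d["last_activity"] > timedelta(days=7):
        flags.append("no activity in 7+ days")
    # Deal moved backward in stage at any point.
    idx = [STAGE_ORDER.index(s) for s in d["stage_history"]]
    if any(later < earlier for earlier, later in zip(idx, idx[1:])):
        flags.append("moved backward in stage")
    if flags:
        print(f"{d['name']} ({d['category']}): " + "; ".join(flags))
```

The output of a script like this is the summary you send the rep: a coverage number and a short list of flagged deals, not a surprise audit.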
During the Call: The Three-Pass Method (30 Minutes)
Structure the call into three passes. Each pass focuses on a different forecast category and has a specific question that drives the discussion.
Pass 1: Commit Deals (10 Minutes)
Commit is where deals should close. So every commit deal gets one question: What evidence supports this closing on time?
Not "do you think it will close?" — that's a yes/no. Ask for evidence. Has the buyer signed a statement of work? Is legal in final review? Is there a signed MSA and are you negotiating terms? Has the buyer set this up as a board decision?
The answer should be specific. "We're in final contract review with the CFO" is evidence. "I spoke to them last week and they seemed positive" is not.
Then challenge gently: "Has the buyer explicitly confirmed the close date? Is the paper process really on track, or are we expecting it to be?" If a deal can't survive that question, it probably shouldn't be in commit.
Decision for each deal: Does it stay in commit, move to best case, or get flagged as at risk?
Pass 2: Best Case Deals (10 Minutes)
Best case deals have good engagement but something is unclear: the timeline, a final approval, or a detail of the buying process. The question here is: What needs to happen to move this to commit?
This isn't "what do you hope happens?" It's a specific, testable event. Maybe it's "the VP has to review the business case and sign off" or "we need to win a competitive evaluation" or "they're waiting for budget to be released in April."
For each deal, establish: What's the specific milestone? When will it realistically happen (a buyer-validated date, not a rep estimate)? If we don't see it by [date], what's our fallback plan?
This forces reps to think in terms of buyer milestones rather than aspirations. It also gives you early warning signs. If a milestone doesn't hit by the target date, that's your signal to escalate or move the deal to a lower forecast category.
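To make that milestone discipline concrete, here is a small sketch, with hypothetical deal names and fields, of how you might track buyer-validated milestones between calls and flag the ones that miss their date.

```python
from datetime import date

# Hypothetical best-case deals with the milestone agreed on the call.
best_case = [
    {"name": "Initech", "milestone": "VP signs off on business case",
     "milestone_due": date(2025, 3, 21), "milestone_done": False},
    {"name": "Umbrella", "milestone": "Budget released for Q2",
     "milestone_due": date(2025, 4, 1), "milestone_done": False},
]

today = date(2025, 3, 24)

for deal in best_case:
    if deal["milestone_done"]:
        print(f"{deal['name']}: milestone hit, discuss promotion to commit")
    elif today > deal["milestone_due"]:
        # Milestone missed its buyer-validated date: escalate or downgrade.
        print(f"{deal['name']}: '{deal['milestone']}' missed {deal['milestone_due']}, "
              "trigger the fallback plan or move the deal down a category")
    else:
        print(f"{deal['name']}: on track, next check {deal['milestone_due']}")
```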
Pass 3: Upside and Risk (10 Minutes)
Review upside deals that could pull into the period. "Could this close early? What would it take?"
Then review at-risk deals — deals that moved backward, haven't had activity, or have a slipping close date. For each one: "What's the recovery plan? What's our confidence level?" Be honest. If a deal is truly at risk of being lost, say it now, not at quarter-end.
Aggregate the numbers: What's your realistic forecast for the period, what's the best case, and how do they compare to your target? What's the variance, and where does it come from?
After the Call: Follow-Through
The forecast call is only valuable if it drives action. Here's what happens next:
- Publish the forecast number with assumptions documented. Write down: how many deals in commit (with close dates), how many in best case (with the milestone that unlocks commit), coverage ratio, and any risks flagged. This becomes your baseline.
- Assign specific actions for deals that need movement. "We need legal to review by Friday" or "Next call is with the CFO to cover ROI." Assign ownership. Track completion.
- Schedule mid-week check-ins on commit deals. Don't wait until next week's full forecast call. On Wednesday, a quick async check: "Anything changed on the Acme deal? Any blockers?" This keeps commit deals from slipping silently.
- Update the CRM with any changes discussed. Stage changes, close date adjustments, new contacts identified — all of it goes into the system immediately. Next week's forecast call works from fresh data.
Forecast Categories: Getting the Definitions Right
Your forecast categories are only as good as their definitions. If reps interpret them differently, your forecast falls apart. Here's a framework that works:
- Commit: Verbal agreement is in place, paper process has been initiated, buying process is on track, and the deal is expected to close in the current period. Confidence > 90%. Deals in commit should have evidence: a signed statement of work, an active contract in legal review, a board decision in place, or equivalent.
- Best Case: Strong engagement from the buyer, decision criteria are understood and met, but either the timeline or a final approval is not yet confirmed. 50-75% confidence. Best case deals should have a clear next milestone that, if achieved, moves them to commit.
- Pipeline: Qualified opportunity, but early in the buying cycle. Multiple steps remain before a close is realistic. 20-50% confidence. These shouldn't factor into your forecast — they're fuel for future periods.
- Omit/Slip: Deal will not close in the current period. Be honest. A deal that slips is not a forecasting failure; it's a data correction. Hiding slips destroys forecast accuracy and prevents you from learning.
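One way to keep these definitions shared rather than individually interpreted is to write them down as data. The sketch below encodes the confidence bands and the commit evidence requirement as a simple check; the thresholds mirror the list above, while the function, field names, and evidence labels are illustrative assumptions.

```python
from dataclasses import dataclass  # optional if you later wrap deals in a type

# Shared category definitions as data, so every rep works from the same thresholds.
CATEGORY_RULES = {
    "commit":    {"min_confidence": 0.90, "requires_evidence": True,  "counts_in_forecast": True},
    "best_case": {"min_confidence": 0.50, "requires_evidence": False, "counts_in_forecast": True},
    "pipeline":  {"min_confidence": 0.20, "requires_evidence": False, "counts_in_forecast": False},
    "omit":      {"min_confidence": 0.00, "requires_evidence": False, "counts_in_forecast": False},
}

# Illustrative evidence types that qualify a commit deal.
COMMIT_EVIDENCE = {"signed_sow", "contract_in_legal", "board_decision"}

def check_category(category: str, confidence: float, evidence: set[str]) -> list[str]:
    """Return the reasons a deal does not meet its claimed category, if any."""
    rules = CATEGORY_RULES[category]
    problems = []
    if confidence < rules["min_confidence"]:
        problems.append(f"confidence {confidence:.0%} below the {rules['min_confidence']:.0%} bar")
    if rules["requires_evidence"] and not (evidence & COMMIT_EVIDENCE):
        problems.append("no written evidence on file; move to best case")
    return problems

# A commit deal backed only by a verbal agreement fails the check.
print(check_category("commit", 0.85, {"verbal_agreement"}))
```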
The Role of Qualification Frameworks in Forecasting
Here's where qualification frameworks like MEDDPIC or BANT connect to your forecast. A deal can be categorised as "commit" because the timeline looks good and you've had positive conversations. But if your MEDDPIC score is below 60%, you're missing critical information about decision criteria, pain quantification, or the decision process itself. That missing information is risk.
The best forecast calls layer in framework scores. A deal in commit with low MEDDPIC or BANT scores deserves extra scrutiny. It means you have confidence in the timeline but not in your understanding of why they need to buy. That's backwards. A truly qualified commit deal should have both timeline confidence AND framework depth.
BANT is particularly useful here because it maps directly to forecast risk. Low Budget confidence = deal risk. Unclear Authority = deal risk. If the buyer hasn't confirmed Need = deal risk. If the Timeline is vague = deal risk. Run BANT on your commit deals. If any component is weak, downgrade or get more evidence.
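As a sketch of that rule, the snippet below scores each commit deal on the four BANT components and flags any weak one for downgrade or further evidence gathering. The 60-point threshold echoes the 60% mark mentioned above, but the scores and deal names are illustrative assumptions, not a standard.

```python
# Hypothetical BANT component scores (0-100) for deals currently in commit.
WEAK = 60

commit_deals = {
    "Acme":  {"budget": 85, "authority": 40, "need": 90, "timeline": 75},
    "Stark": {"budget": 70, "authority": 80, "need": 88, "timeline": 82},
}

for name, bant in commit_deals.items():
    # Any component below the threshold is deal risk that the forecast hides.
    weak = [component for component, score in bant.items() if score < WEAK]
    if weak:
        print(f"{name}: weak on {', '.join(weak)} - downgrade or gather more evidence")
    else:
        print(f"{name}: BANT holds up, keep in commit")
```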
What This Looks Like in Practice
Summit53 brings this framework into daily execution. Instead of reviewing a CRM snapshot in a spreadsheet, managers can see their forecast calls operationalised in real time.
The Forecast Confidence view shows every deal with:
- Forecast category (commit, best case, pipeline)
- Close date with confidence bands
- Framework score (MEDDPIC, BANT, or SPICED) overlaid on the forecast category
- Health indicators: days since last activity, stage progression, contact freshness
- Risk flags: deals with slipping close dates, no activity, or low framework scores despite high forecast placement
When you run your forecast call with Summit53, you're not debating intuition. You're reviewing evidence. A rep says a deal is commit. The system shows: close date Q1 (solid evidence), MEDDPIC score 73 (strong), last activity 3 days ago, and decision process confirmed. That deal stays in commit. Another deal is categorised as commit, but its MEDDPIC score is 52, there has been no activity in 10 days, and the close date is "sometime this quarter." That's a downgrade conversation.
The Unified Forecast Summary aggregates this across your team. You see not just total commit and best case, but how that breaks down by rep, by stage, by risk level. You see which reps have deals with strong framework scores vs. reps whose deals are high-forecast but low-qualification. Over time, you identify patterns. Maybe your team's MEDDPIC scores are strong but your BANT scores are weak — that suggests you're qualifying need and timeline well but missing authority mapping. That's actionable intelligence.
Mid-week, the Weekly Action Plans feature flags which commit deals need attention based on activity patterns. No activity in 7 days? That deal gets a priority action: "Reconnect with champion on timeline and any blockers." This keeps commit deals from slipping silently between forecast calls.
Five Rules for Better Forecast Calls
Distill your forecast call discipline into five non-negotiable rules:
- No deal stays in commit without evidence. Written evidence. A close-out checklist completed, a contract in legal, a board decision in place. Verbal is not evidence. If the rep can't articulate what evidence backs a commit categorisation, it moves to best case.
- Close dates must be buyer-validated, not rep-estimated. "I think they'll close in March" is an estimate. "They told us they need to sign by March 15 because of their fiscal calendar" is validated. One guides your forecast; the other is a guess.
- Review last week's forecast assumptions — what changed and why? Compare this week's forecast to last week's. Deals that moved between categories, close dates that shifted, new deals added to commit. Understand the reasons. Look for patterns. If deals consistently slip by 30 days, your reps are optimistic by 30 days — account for that bias.
- Keep the call to 30 minutes. A longer call means unfocused reviews. If you can't cover your deals rigorously in 30 minutes, you have too many deals to properly forecast, or you haven't prepared adequately. Shorter calls force discipline in both preparation and discussion.
- Track forecast accuracy over time to improve the process itself. After each quarter, compare your forecast to actual results. Which forecast categories were most accurate? Which reps? Which stages? Which framework scores predicted success? Use that data to tighten your definitions and coaching. Forecasting is a skill that improves with feedback.
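The fifth rule is easy to operationalise with a few lines of arithmetic. The sketch below, using made-up dates, measures the average slip between forecasted and actual close dates; a persistently positive number is exactly the optimism bias the third rule asks you to account for.

```python
from datetime import date
from statistics import mean

# Hypothetical history of forecasted vs. actual close dates for last quarter's commit deals.
closed_deals = [
    {"forecast_close": date(2025, 1, 31), "actual_close": date(2025, 3, 4)},
    {"forecast_close": date(2025, 2, 14), "actual_close": date(2025, 3, 10)},
    {"forecast_close": date(2025, 2, 28), "actual_close": date(2025, 3, 28)},
]

# Average slip in days; fold a persistent bias back into next quarter's forecast.
slips = [(d["actual_close"] - d["forecast_close"]).days for d in closed_deals]
print(f"Average slip: {mean(slips):.0f} days across {len(slips)} closed deals")
```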
Connecting Forecast Accuracy to Revenue Execution
Forecasting isn't separate from deal execution. In fact, it's the lens through which you see execution. A forecast call that surfaces risk early gives you time to intervene. A forecast call that documents evidence gives you something to coach against. A forecast call that connects to weekly action plans turns forecast insights into rep behaviours.
High-performing teams don't just forecast better — they execute better, because they use the forecast call to align on what winning looks like and what blockers need to be removed now, not at quarter-end.
If you want to improve forecast accuracy, don't change your prediction model. Change how you run your forecast calls. Make them evidence-based. Make them disciplined. Make them part of your weekly execution rhythm. The accuracy will follow.
And if you want to operationalise that discipline at scale, connect your forecast calls to deeper frameworks and activity patterns. Review how leading teams are transforming sales forecasting with integrated qualification and pipeline intelligence, or explore how revenue metrics connect to coordinated execution.
For a practical example of weekly forecast discipline in action, see how Weekly Action Plans bring forecast calls into daily execution.
Ready to Run Better Forecast Calls?
If your forecast accuracy is suffering, the answer isn't a better prediction formula. It's a better forecast process. Start with the three-pass method. Add framework scoring. Track accuracy over time. And if you're ready to operationalise forecast discipline across your team, let's talk about how Summit53 can help.