Monday morning. Six tender alerts in your inbox. Two forwarded by a colleague with “could this be relevant for us?” A TED notification. Three emails from donor portals.
You have a team of four. Each serious submission costs 150 to 300 hours. You can’t open every one of them.
In our previous article, we broke down the full 8-criteria framework for a structured bid/no-bid decision. That’s the deep analysis: 45 minutes, weighted scoring, documented decision. But when six tenders land at the same time, you don’t have 45 minutes per dossier. You need a quick filter to figure out which ones deserve that full analysis.
Here are 5 signals. 20 minutes per tender. The goal: weed out the false positives before they consume your team.
1) You can match it to 2 delivered references. Without thinking.
First filter, and the most unforgiving: does this tender fall within your proven delivery zone?
Not your area of interest. Not your area of ambition. Your area of evidence.
Country, sector, type of intervention, project size. If you have to step outside your demonstrated track record, the real cost skyrockets: partners sourced in a rush, methodologies adapted on the fly, experts recruited outside your network. Every gap from your proven ground adds execution risk and lowers your odds against a competitor who checks every box.
The test: can you name 2 similar missions you’ve delivered, in under 30 seconds? If yes, keep going. If you have to think about it, that’s already a weak signal.
2) The ToR is serious. Not a sloppy copy-paste job.
This signal is underestimated. Yet it reveals a great deal.
A well-written ToR—with explicit evaluation criteria, clear weightings, a defined scope, and precise deliverables—signals a donor that knows what it wants. You can build a surgical response strategy.
A vague, contradictory ToR with catch-all objectives and fuzzy evaluation criteria? That’s the sign of a poorly framed process on the donor’s side. And a poorly framed process produces unpredictable evaluations. You could write the best proposal in the lot and still lose on a criterion no one truly controls.
We’ve seen teams invest 200 hours on a tender whose ToR was three pages long with no weighting. The result: the evaluation hinged on implicit preferences that nothing in the dossier hinted at. 200 hours. Zero learning. Zero recourse.
The test: read the scoring grid. If it doesn’t exist or is too vague to interpret, lower your score by one notch. If it’s precise and your strengths align with the highest-weighted criteria, raise it by one.
3) The financial equation holds up—not just the headline budget.
Many teams get caught here. The advertised budget looks attractive. But the contract’s economic mechanics tell a different story.
Four checks in 5 minutes:
- Are direct costs covered by the budget, or will you have to compress your rates?
- Is the allowed indirect cost rate compatible with your structure (watch out for 7–10% ceilings)?
- Are payment terms sustainable (sufficient advance, realistic milestones, manageable retention)?
- Is the compliance burden proportionate, or will it eat into the margin?
The test: do a back-of-the-envelope calculation. If the likely net margin falls below your break-even threshold, it’s a “no” disguised as an opportunity.
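That back-of-the-envelope calculation can be sketched in a few lines. This is a minimal illustration, not a costing model: all figures are hypothetical, and the simple assumption here is that real overhead scales linearly with direct costs.

```python
def quick_margin(budget, direct_costs, real_overhead_rate, bid_cost):
    """Rough net margin check (illustrative only).

    budget             -- advertised contract budget
    direct_costs       -- fees, travel, per diems you expect to incur
    real_overhead_rate -- your actual indirect rate (which may exceed
                          the donor's 7-10% ceiling; the excess is
                          structurally unreimbursed)
    bid_cost           -- the cost of writing the proposal itself
    """
    overhead = direct_costs * real_overhead_rate
    margin = budget - direct_costs - overhead - bid_cost
    return margin, margin / budget * 100

# Hypothetical tender: attractive headline budget, thin economics.
margin, margin_pct = quick_margin(
    budget=500_000,
    direct_costs=410_000,
    real_overhead_rate=0.18,
    bid_cost=25_000,
)
# A negative margin here is exactly the "no disguised as an
# opportunity" the test is meant to catch.
```

With these example numbers the margin comes out negative before a single deliverable is written, even though the headline budget looked comfortable.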
4) You can submit an excellent proposal. Not just an “acceptable” one.
An acceptable proposal doesn’t win. An excellent one does. The difference almost always comes down to whether the right resources are available at the time of submission.
Look at this without rose-tinted glasses:
- Is your proposal manager free, or already stretched across 2 other bids?
- Are the CVs of key experts up to date and ready to go, or will you be patching them together?
- Are critical annexes (references, audits, compliance evidence) accessible within 48 hours?
- Does the submission deadline allow for a proper technical review, or will you be sending a first draft?
We all know the temptation: “we’ll figure it out, it’ll be fine.” It won’t. Evaluators can tell the difference between a proposal that was built and one that was thrown together under pressure. The latter loses.
The test: if more than 2 key resources are unavailable or if the deadline forces you to skip internal review, that’s a strong signal your submission will fall below your own standard.
5) You have an edge. Not just a presence.
Last signal, and the most uncomfortable one: your competitive position.
Who else is likely to respond? Is the incumbent firmly in place? Are better-connected local consortia already in the running? Do you have a track record with this donor, or are you starting from scratch?
The gap between “we’re capable” and “we’re the frontrunner” is enormous in terms of win probability. A team that’s honest with itself knows the difference.
The test: can you articulate in one sentence what sets you apart from the 3 most likely competitors? If that sentence exists and is grounded in facts—references, rare expertise, an exclusive partnership, local presence—it’s a strong signal. If it’s “we’re good and motivated,” that’s not an edge.
The quick scoring method
Rate each signal from 0 to 5:
- 0 = deal-breaker
- 3 = mixed, workable under conditions
- 5 = favorable
Then decide:
| Total Score | Decision |
|---|---|
| 20–25 | High priority. Launch the full analysis (8-criteria framework). |
| 13–19 | Borderline. Conditional go with a clear mitigation plan. |
| 0–12 | No-go. Save your resources for a stronger opportunity. |
This score isn’t the final decision. It’s the filter that keeps you from spending 45 minutes on a deep analysis for a dossier that couldn’t even pass the 20-minute test.
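The triage rule above is simple enough to express in a few lines. One assumption made explicit here: since a 0 is defined as a deal-breaker, a single 0 forces a no-go even when the other signals would push the total past 13.

```python
def triage(scores):
    """Map five 0-5 signal scores to a screening decision.

    Assumption: any single 0 (deal-breaker) overrides the total,
    in line with the "0 = deal-breaker" definition above.
    """
    if len(scores) != 5 or not all(0 <= s <= 5 for s in scores):
        raise ValueError("expected five scores between 0 and 5")
    if 0 in scores:
        return "no-go"
    total = sum(scores)
    if total >= 20:
        return "high priority"   # launch the full 8-criteria analysis
    if total >= 13:
        return "conditional go"  # go only with a clear mitigation plan
    return "no-go"
```

For example, `triage([4, 5, 3, 4, 5])` lands in high priority, while `triage([5, 5, 5, 0, 5])` is a no-go despite a total of 20, because one signal is a deal-breaker.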
How ICOpedia speeds up this first screening
The 8-criteria framework from the previous article is the structured decision. This 5-signal filter is the pipeline triage.
ICOpedia steps in precisely at this stage: when 10 tenders arrive and you need to figure out within minutes which ones deserve your attention.
The platform aggregates opportunities, extracts the key elements from each dossier (budget, eligibility, required documents, deadlines) and lets you read a ToR by asking questions in natural language instead of scrolling through 80-page PDFs.
The result: your 20-minute filter becomes a 5-minute filter. And your team moves straight to the deep analysis on the dossiers that actually deserve it.
Put it to the test this week
Take your next 10 incoming tenders. Run each one through the 5 signals. Time yourself.
Then measure over 30 days:
- Average time per dossier at the screening stage.
- Number of dossiers eliminated before mobilizing the full team.
- Win rate on the proposals you actually submitted.
Want the full framework for the next step?
Read the article: 8 criteria to structure your bid/no-bid decision
Want to automate the screening?
Try ICOpedia. Less noise, sharper decisions.
