“Not sufficiently justified” in EU proposals: why evaluators need evidence, not more words
“Not sufficiently justified” is one of the most common and frustrating comments in EU proposal evaluations. It rarely means that the proposal needed more text. It usually means that a claim lacked evidence, logic, and traceability.

Few comments in an Evaluation Summary Report are more frustrating than “not sufficiently justified”.
It feels vague.
It feels unfair.
It feels like the evaluator wanted more explanation, but did not explain what was missing.
For proposal writers, consultants, universities, research centres, startups, and innovation teams, this comment can be especially painful because it often appears next to a section that already felt detailed, ambitious, and carefully written.
But in most cases, “not sufficiently justified” does not mean “write more.”
It means something more specific.
It means: show the evidence, the logic, and the traceability behind the claim.
That distinction matters.
Because adding more words to an unsupported claim does not make it stronger.
It only makes the weakness harder to find.
What “not sufficiently justified” really means in EU proposal evaluation
EU funding proposals are not evaluated on ambition alone.
They are evaluated on whether the proposal gives the evaluator enough evidence, logic, and traceability to believe the claims being made.
A sentence can sound impressive and still be weak.
A paragraph can be long and still be unjustified.
A claim can be strategically relevant and still fail because the proposal does not explain how the conclusion was reached.
When evaluators write “not sufficiently justified”, they are usually pointing to one of four problems.
1. A claim without evidence
This is the most common issue.
The proposal states that the project will improve performance, reduce cost, increase efficiency, accelerate adoption, or create major impact.
But it does not provide evidence.
For example, the proposal may claim that the solution will significantly improve hospital efficiency.
That may be true.
But the evaluator still needs to understand:
- Which part of hospital efficiency?
- What is the current baseline?
- What evidence supports the expected improvement?
- Has this been tested, simulated, benchmarked, or validated?
- Under which conditions does the improvement apply?
Without that chain, the statement remains a claim.
Not a justified conclusion.
2. A number without derivation
Numbers are powerful in EU proposals.
They can make a proposal more concrete, credible, and evaluator-friendly.
But unsupported numbers are also dangerous.
A proposal may state that the solution will reduce costs by 25%.
That sounds precise.
But precision is not the same as credibility.
The evaluator will immediately ask:
- Why 25%?
- Compared to which baseline?
- Based on which dataset?
- Is it a pilot result, a simulation, a benchmark, or an assumption?
- What variables were included or excluded?
- Can the result be replicated in the target environment?
A number without derivation can damage trust because it suggests certainty without showing the reasoning behind it.
This is one of the reasons why market and impact claims often fail under evaluation pressure.
We discussed a similar issue in our proposal tip on exaggerated market assumptions: Does your project have a higher market share than Tesla?
The same principle applies here.
A number is only as strong as the logic behind it.
3. An assumption without boundary conditions
Many proposal claims depend on assumptions.
That is normal.
The problem is not using assumptions.
The problem is presenting assumptions as if they were facts.
For example, a proposal may assume that hospitals will rapidly adopt the solution.
This may be reasonable in some contexts.
But the evaluator needs to know the boundary conditions:
- Which type of hospitals?
- Which countries?
- Which IT systems?
- Which procurement environments?
- Which regulatory constraints?
- Which clinical workflows?
- Which implementation resources?
- Which integration requirements?
A justified assumption is transparent.
It tells the evaluator where the claim applies, where it may not apply, and what needs to be true for the expected result to materialise.
An unjustified assumption hides uncertainty.
Evaluators notice that.
4. An ambition without a credible pathway
EU proposals often fail because they describe an attractive destination without explaining the route.
For example, a proposal may promise to transform healthcare delivery across Europe.
That may sound compelling.
But it is not yet justified.
A credible pathway would explain:
- What will be transformed first
- Which workflow will be addressed
- Which users will adopt the solution
- Which evidence already supports adoption
- Which technical and operational barriers remain
- How pilots will validate the approach
- How results will scale beyond the first deployment
Impact needs a mechanism.
Scale needs a pathway.
Ambition needs proof.
Without these elements, the evaluator may not reject the vision, but they may conclude that it is not sufficiently justified.
More text is not the solution
When proposal teams receive comments like “not sufficiently justified”, the natural reaction is often to expand the explanation.
Add another paragraph.
Add stronger adjectives.
Add more benefits.
Add more context.
But this is often the wrong fix.
The evaluator is not asking for volume.
The evaluator is asking for:
- Evidence
- Logic
- Traceability
A proposal can become longer and still remain weak.
In fact, a longer unjustified claim can be worse because it increases evaluator fatigue without solving the underlying credibility gap.
Good proposal writing is not about making claims sound stronger.
It is about making claims easier to verify.
A simple test for every critical claim
Before submitting an EU funding proposal, every critical claim should survive three questions.
1. Based on what?
What evidence supports the statement?
This may include:
- Pilot data
- Prototype tests
- Retrospective analysis
- Scientific literature
- Customer validation
- Market research
- Benchmarking
- Regulatory data
- Technical simulations
- Internal experimental results
The key is not to include evidence randomly.
The key is to connect the evidence directly to the claim.
2. Compared to what?
A claim only becomes meaningful when the baseline is clear.
For example:
- Reduced cost compared to which current process?
- Faster diagnosis compared to which workflow?
- Lower emissions compared to which reference scenario?
- Higher efficiency compared to which state of the art?
- Better accuracy compared to which benchmark?
- Larger market share compared to which adoption curve?
Without a baseline, the evaluator cannot judge the magnitude of the improvement.
A proposal that says “25% improvement” without a comparison point is not giving the evaluator a result.
It is giving the evaluator a question.
3. Validated how?
Even when the baseline and evidence exist, the evaluator still needs to understand how the claim was validated.
Was it tested in a lab?
Was it validated in a relevant environment?
Was it simulated?
Was it derived from peer-reviewed studies?
Was it based on user interviews?
Was it measured in a pilot?
Was it estimated using conservative assumptions?
Each validation method has a different strength.
The proposal should make that strength visible.
Example: the difference between ambitious and justified
Consider a sentence claiming that an innovative solution will deliver a 25% improvement across hospital workflows in Europe.
This sounds ambitious.
It is also long.
But it is still weak.
Why?
Because the evaluator is left with too many unanswered questions.
- Why 25%?
- Compared to which baseline?
- In which workflow?
- In which type of hospital?
- With which patient group?
- Validated with what data?
- Through which mechanism?
The sentence creates an expectation, but it does not justify it.
Now compare it with a version that names the baseline workflow, cites the pilot data behind the 25% estimate, and states how the result will be validated.
That version is stronger.
Not because it is more ambitious.
Because it gives the evaluator a chain they can follow from claim to evidence.
That is what justification means.
Not more words.
Better proof.
Better logic.
Better traceability.
Why this matters for Horizon Europe and EIC Accelerator proposals
In competitive EU funding calls, evaluators are not only asking whether the project is interesting.
They are asking whether the proposal has made its case convincingly.
This is especially important in programmes such as Horizon Europe and the EIC Accelerator, where applicants often present complex technologies, ambitious scale-up plans, market projections, technical milestones, regulatory pathways, and impact estimates.
In these contexts, unsupported claims can appear in many places:
- Excellence sections
- State of the art comparisons
- Innovation claims
- Technical objectives
- TRL progression
- Market forecasts
- Competitive analysis
- Impact pathways
- Commercialisation plans
- Work packages
- Risk mitigation
- Financial projections
One weak claim may not destroy a proposal.
But repeated unsupported claims create a pattern.
That pattern can lead evaluators to question whether the team has overestimated the project maturity, market readiness, technical performance, or commercial feasibility.
The evaluator does not have access to your internal context
One of the most dangerous assumptions in proposal writing is this: “the evaluators will understand what we mean.”
They may not.
Evaluators do not have access to the conversations, internal data, customer calls, technical debates, pilot lessons, or strategic reasoning behind the proposal.
They only see the text.
If the proposal does not make the evidence visible, the evidence effectively does not exist for evaluation purposes.
This is why proposals need to be written with an evaluator mindset.
Not only: what do we want to say?
But also: what does the evaluator need to see in order to believe it?
This is also why using generic AI tools without proper evaluator logic can be risky. A text may become smoother, more fluent, and more confident, while still leaving the underlying evidence gaps untouched.
We covered this distinction in more detail here: Ruthless Evaluator vs ChatGPT: why proposal evaluation needs more than rewriting
How to identify unjustified claims before submission
A practical way to review a proposal is to scan for sentences that contain strong claims.
Look especially for words and expressions such as:
- significant
- disruptive
- transformative
- scalable
- cost-effective
- market-leading
- unique
- breakthrough
- highly efficient
- rapid adoption
- strong demand
- substantial reduction
- major improvement
- clear competitive advantage
These terms are not wrong.
But each one creates a burden of proof.
The stronger the claim, the stronger the evidence should be.
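As a quick internal check, this scan can be automated. The sketch below uses the term list from the checklist above; the function name and output format are illustrative, not a standard tool:

```python
import re

# Strong-claim terms from the checklist above; each one creates a burden of proof.
STRONG_CLAIM_TERMS = [
    "significant", "disruptive", "transformative", "scalable",
    "cost-effective", "market-leading", "unique", "breakthrough",
    "highly efficient", "rapid adoption", "strong demand",
    "substantial reduction", "major improvement",
    "clear competitive advantage",
]

def flag_strong_claims(text: str) -> list[tuple[int, str, str]]:
    """Return (line number, matched term, line text) for every strong-claim term found."""
    flagged = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for term in STRONG_CLAIM_TERMS:
            if re.search(r"\b" + re.escape(term) + r"\b", line, re.IGNORECASE):
                flagged.append((lineno, term, line.strip()))
    return flagged

draft = "Our unique platform delivers a substantial reduction in diagnostic costs."
for lineno, term, line in flag_strong_claims(draft):
    print(f"line {lineno}: '{term}' -> {line}")
```

Each flagged sentence is a candidate for the three questions below: based on what, compared to what, validated how.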
A useful internal review exercise is to create a simple claim-evidence matrix.
For every critical claim, ask five questions:
- What are we saying?
This defines the claim.
- Compared to what?
This defines the baseline.
- Based on what?
This defines the evidence.
- Validated how?
This defines the validation method.
- What still needs to be proven?
This defines the remaining uncertainty.
This kind of review can reveal weak points quickly.
It can also prevent last-minute proposal editing from becoming superficial polishing rather than real quality control.
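The five-question matrix can be sketched as a simple data structure, assuming a plain Python review workflow (the class and field names are my own illustration, not an official template):

```python
from dataclasses import dataclass

# One row per critical claim, with the five review questions as fields.
@dataclass
class ClaimRow:
    claim: str             # What are we saying?
    baseline: str = ""     # Compared to what?
    evidence: str = ""     # Based on what?
    validation: str = ""   # Validated how?
    open_points: str = ""  # What still needs to be proven?

    def gaps(self) -> list[str]:
        """List the review questions this claim cannot yet answer."""
        missing = []
        if not self.baseline:
            missing.append("Compared to what?")
        if not self.evidence:
            missing.append("Based on what?")
        if not self.validation:
            missing.append("Validated how?")
        return missing

rows = [
    ClaimRow(claim="25% reduction in diagnostic costs",
             evidence="internal pilot data"),
]
for row in rows:
    for question in row.gaps():
        print(f"UNJUSTIFIED: '{row.claim}' -> {question}")
```

Any row that prints a gap marks a claim the evaluator is likely to flag.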
What good justification looks like
A justified proposal claim usually has five components.
1. A precise scope
The proposal defines exactly what is being claimed.
Not a vague promise to improve hospital efficiency.
Better: a claim that names the specific workflow, setting, and metric being improved.
Specificity makes the claim easier to assess.
2. A clear baseline
The proposal explains the current situation.
For example, how long the current process takes, what it costs, or how often it fails today.
A baseline gives the evaluator a reference point.
3. A credible mechanism
The proposal explains how the improvement will happen.
For example, which step of the current workflow the technology changes, and why that change produces the expected improvement.
The evaluator needs to see the causal link.
4. Supporting evidence
The proposal shows where the estimate comes from.
For example, pilot results, benchmark comparisons, or published studies from which the estimate is derived.
Evidence makes the claim assessable.
5. Transparent limitations
Strong proposals do not hide uncertainty.
They manage it.
For example, by stating which conditions the estimate assumes and which results still need to be validated during the project.
This shows maturity.
It tells the evaluator that the team understands what is proven, what is assumed, and what still needs validation.
A strong sentence is not the one that sounds most ambitious
In EU funding proposals, the strongest sentence is rarely the most impressive one.
It is the one that the evaluator can verify.
That is a different standard.
A good proposal sentence should help the evaluator understand:
- What is being claimed
- Why it matters
- What evidence supports it
- How it was calculated
- What baseline it uses
- What assumptions it depends on
- How it will be validated further
This is what separates persuasive proposal writing from promotional writing.
Promotional writing tries to sound convincing.
Proposal writing needs to be verifiable.
Where Ruthless Evaluator fits
This is exactly why we built Ruthless Evaluator.
Not to make proposals sound better.
Not to add more words.
Not to decorate weak claims with stronger language.
Ruthless Evaluator is designed to help applicants, consultants, universities, research centres, startups, and innovation teams identify the weaknesses that often become painful Evaluation Summary Report comments later.
It helps detect issues such as:
- Unsupported claims
- Weak logic
- Missing baselines
- Unclear assumptions
- Overstated impact
- Untraceable numbers
- Vague validation
- Inconsistent reasoning
- Claims that sound good but are not yet evaluator-proof
The goal is simple.
To help teams fix the proposal before submission, while there is still time to improve it.
Better to find the weakness before the evaluator does
“Not sufficiently justified” is not a request for longer writing.
It is a signal that the evaluator could not follow the chain from claim to proof.
The solution is not more volume.
The solution is better evidence, better logic, and better traceability.
Before submitting a proposal, ask every critical claim:
- Based on what?
- Compared to what?
- Validated how?
If the proposal cannot answer those questions clearly, the evaluator may answer them for you in the ESR.
And that is much harder to fix.
Better to meet Ruthless Evaluator before submission than inside the Evaluation Summary Report.
Run an evaluator-grade review on the draft
Upload a version, select programme context, and get structured feedback you can act on.