When Competition Becomes Statistical Elimination
Extremely low success rates may signal prestige, but they can also weaken the innovation ecosystem by discouraging strong applicants and overloading evaluation systems.

A recurring question in European innovation policy is this:
How competitive should funding programmes actually be?
The first results from the EIC Advanced Innovation Challenges provide a striking case.
Two thematic challenges were launched:
- Accelerating Physical AI
- Translating Disruptive New Approach Methodologies (NAMs)
The response was immediate and massive.
- 709 proposals submitted
- €130.7M requested funding
- €6M available budget
- 20 projects expected to be funded
That leads to a success rate of roughly 2.8%.
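The figures above are simple arithmetic; a quick sketch (using only the numbers quoted in this post) makes both the success rate and the oversubscription explicit:

```python
# Call figures as quoted above.
proposals = 709          # proposals submitted
funded = 20              # projects expected to be funded
requested_m = 130.7      # requested funding, in EUR millions
budget_m = 6.0           # available budget, in EUR millions

success_rate = funded / proposals
oversubscription = requested_m / budget_m

print(f"Success rate:     {success_rate:.1%}")      # ~2.8%
print(f"Oversubscription: {oversubscription:.1f}x")  # ~21.8x
```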
This was the *first call of a completely new programme*.
Yet participation levels already rival some of the most established Horizon Europe instruments.
What success rates actually signal
Low success rates are often interpreted as a sign of excellence.
The logic is simple:
If only a few projects are funded, the programme must be selecting the very best.
But extreme competition tells a more complex story.
Success rates below roughly 10% begin to function less like selection mechanisms and more like statistical elimination.
At that point:
- Even excellent proposals are unlikely to be funded
- Proposal preparation becomes increasingly risky
- Participation starts favouring applicants with large proposal-writing resources
In other words, extreme competition may not only filter quality.
It can also distort the ecosystem.
The pattern across EIC programmes
The EIC ecosystem already shows similar dynamics across several instruments:
- EIC Pathfinder (May 2025): ~2.1% success rate
- EIC Advanced Innovation Challenges: ~2.8%
- EIC Transition: ~6.5%
- EIC Accelerator: consistently below 8%
These are some of the lowest funding rates in the global innovation funding landscape.
Prestige and visibility rise with competition, but this degree of selectivity raises structural questions about how the system functions.
The hidden cost of ultra-low success rates
Very low success rates affect several parts of the innovation ecosystem.
1. Applicant behaviour
Strategic applicants begin to reconsider whether participation is worthwhile.
Preparing a competitive Horizon proposal often requires:
- Months of technical work
- Market analysis
- Consortium building
- Legal preparation
- Professional proposal writing
If the statistical chance of success approaches 1 in 40 or lower, some of the strongest innovators may simply redirect their efforts elsewhere.
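This redirection is, at bottom, an expected-value calculation. A minimal sketch of that reasoning follows; the preparation cost and grant size are illustrative assumptions, not official programme figures:

```python
# Illustrative expected-value check a prospective applicant might run.
# NOTE: prep_cost and grant are hypothetical placeholder values,
# not data from any EIC or Horizon Europe call.
prep_cost = 30_000   # assumed cost of preparing one proposal (EUR)
grant = 300_000      # assumed grant value if funded (EUR)

def expected_value(p_success: float) -> float:
    """Expected net return of submitting one proposal."""
    return p_success * grant - prep_cost

# At ~2.8% (roughly 1 in 36), the expected return is deeply negative;
# applicants only rationally apply for strategic or prestige reasons.
print(expected_value(0.028))
print(expected_value(0.10))  # break-even under these assumed numbers
```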
2. Evaluator capacity
Extremely large proposal volumes place heavy pressure on evaluators.
Hundreds of proposals per call require:
- Large expert pools
- Rapid evaluation cycles
- Increasing reliance on simplified scoring processes
This risks reducing the depth of evaluation precisely when proposals become more complex.
3. Ecosystem signal
Funding programmes do not only allocate capital.
They also send signals about the innovation environment.
If the perception becomes that the probability of funding is statistically negligible, the programme may unintentionally discourage the very actors it aims to attract.
Efficiency versus ecosystem support
From a policy perspective, extreme competition can appear efficient.
A small budget funds only the very best projects.
But innovation programmes serve multiple objectives:
- Supporting breakthrough technologies
- Encouraging ambitious applicants
- Building strong research and startup ecosystems
When success rates fall too low, the system risks shifting from supporting innovation to merely filtering proposals.
The FP10 question
As the European Union begins designing the next framework programme, FP10, these dynamics deserve careful consideration.
The key policy question is not simply how competitive programmes should be.
It is whether success rates remain high enough to keep excellence engaged.
A system that is too easy attracts mediocre projects.
But a system that becomes statistically unreachable may push the most capable innovators elsewhere.
Finding that balance will be one of the central design challenges for the next generation of EU innovation funding.
Where Ruthless Evaluator comes in
Ruthless Evaluator cannot increase the success rate of a call.
No tool can.
But when competition becomes this intense, proposal quality becomes even more decisive.
Small weaknesses in logic, impact framing, or evaluation alignment can be the difference between:
- a proposal that survives the first stage
- and one that disappears into statistical noise.
Better to detect those weaknesses before submission than in the Evaluation Summary Report (ESR).
Run an evaluator grade review on the draft
Upload a version, select programme context, and get structured feedback you can act on.