Friday, June 20, 2025

Why anyone can sell a ten-pound note for a fiver (but probably shouldn’t)

Imagine you decide to go into business, selling your expertise and specialist services to whoever needs them – probably other businesses. Let’s say, for argument’s sake, that your specialist service is material-failure analysis using high-resolution imaging (scanning electron microscopy). You hand in your notice and set out, shopping list in hand, to purchase everything your new start-up will need.

Kitting up

First off, you spend £50,000 on a suitable scanning electron microscope plus necessary ancillaries (you could spend way more on an SEM package, but a relatively small bench-top model will do just fine). You spend a few more thousand – let’s conservatively say £5k – on office equipment and furniture, general office supplies, a basic business website, and some design and branding consultancy. And you find a nice little workshop unit to rent, which will be well-located for your business customers and provide the space you need to accommodate your business activities (a lab area and somewhere to do the office-admin stuff).

Boom – you’re ready to open for business! So, what to charge?

Well, in your old job, working for a university, your gross annual salary was £60,000. You don’t really want to take a pay cut, so based on a 40-hour working week and 45 working weeks in the year you decide to charge £33 per hour (£33 x 40 x 45 = £59,400; not quite £60k, but you’ll pay less in tax and NI as a self-employed sole trader). Simple! And you’ll be super-competitive too – your market research indicates that your competitors are charging way more than this. Ahhh, self-employed life is going to be so good…

I feel like a toucan (wherever I turn I just see a big bill)

Wait up though. Not all of that £33 – or even the bit that’s left over after tax – is going to go into your pocket. It turns out there are quite a few other things it needs to cover. Your little premises are costing you £5,500 a year – a pretty good deal, but it still needs to be covered. Utilities – heat, light, telephone – add another four grand to that. Your accountant wants a grand, and your website hosting and maintenance package is another £500 (cheap!). You’re going to be spending about a thousand on office sundries and miscellaneous stuff each year too. And another thousand on business insurance.

Hmmm, that £60k-ish annual salary is starting to look like less than £50k.

It gets worse. The SEM equipment should last about 10 years, so you’ll need to put aside around £5k every year (ignoring inflation) to cover eventual replacement. In the meantime, the maintenance package and warranty costs £1,000 a year, and that needs to be covered too.

Now your salary’s less than £45k.

Let’s not ignore capital costs. When you started out you spent about £55k on equipment, mostly your SEM machine. If you borrowed that money, then you’ll be paying about £4,000 a year in interest. If you spent your own money then you’ll be missing out on £2,000 or more a year in interest that it would have earned. More expense!

And there’s another cost on the horizon, this time a big one. You don’t want to be working every waking hour, but there’s other stuff to do besides looking at broken bits of metal in minute detail through a microscope. Lots of other stuff. The phone needs answering. Invoices need preparing. Overdue payments need chasing. Admin, admin, admin, it all takes time – your time! And actually you could do with a bit of help with the microscopy work too – specifically sample preparation and report writing. So you decide to take someone on part time to help you. The total annual cost of employing them, including on-costs like employer’s National Insurance, will be £30k. Good job they’re only part time!

Now there’s little more than six grand left in the pot to cover your salary. Let’s hope no other unexpected costs crop up, or you’ll soon be working for nothing! Wait – where’s that leak coming from??

Bear with me, there’s a point to all this.

Re-thinking those fees

It’s become abundantly clear that your approach to costing just isn’t viable. When you work everything out, it turns out that to protect your £60k-ish annual salary you’ll need to charge at least £62 per hour for your services – let’s say £65 to be on the safe side. Double what you first had in mind. Ahhh, that’s why your competitors all charge at least sixty quid an hour for the same service – they have to! And if you’re going to be charging £65 this year, then you’ll need to up that to over £67 next year just to keep pace with inflation, never mind anyone getting a promotion or performance-related pay rise. It turns out that balancing the books in a business is tough – you had no idea just how tough! That old job in the uni is starting to look quite appealing…
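The arithmetic behind that revised rate can be sketched in a few lines of code. All the figures are the illustrative ones used above (the £4,000 cost-of-capital figure assumes the borrowed-money scenario):

```python
# Full-economic-cost hourly rate, using the illustrative figures above.

target_salary = 60_000          # the salary we want to protect

annual_overheads = {
    "workshop rent": 5_500,
    "utilities": 4_000,
    "accountant": 1_000,
    "website hosting & maintenance": 500,
    "office sundries": 1_000,
    "business insurance": 1_000,
    "SEM replacement fund": 5_000,        # £50k machine over ~10 years
    "SEM maintenance & warranty": 1_000,
    "cost of capital": 4_000,             # interest on ~£55k start-up spend
    "part-time employee": 30_000,         # including on-costs like employer's NI
}

billable_hours = 40 * 45        # 40-hour weeks, 45 working weeks a year

total_annual_cost = target_salary + sum(annual_overheads.values())
hourly_rate = total_annual_cost / billable_hours

print(f"Total annual cost: £{total_annual_cost:,}")   # £113,000
print(f"Break-even hourly rate: £{hourly_rate:.2f}")  # £62.78
```

Hence the "at least £62 per hour" figure: the overheads alone come to £53,000, nearly matching the salary they sit on top of.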

So what’s the take-home from this cutesy little parable? Well, it’s definitely not ‘don’t go into business’. But whether you decide to set up on your own or you play a part in delivering your university’s business activities, do it with a full understanding of what’s really involved cost-wise.

Why this matters in academia

Universities provide a wide range of specialist services to business, enabled and supported by their extensive and advanced research infrastructure. Business customers don’t commission these services just for the joy of finding stuff out, or because they like giving money to universities. They commission them because they need them. (Moreover, it’s worthy of note that they can get quite generous tax breaks for qualifying expenditure on R&D, including that which is carried out for them by researchers and students in universities.) And, because they operate in the world of business, they understand all that stuff about paying the rent, keeping the lights on and even fixing the roof from time to time – all the stuff we commonly refer to as overhead costs. They know only too well the true cost of being in business. If ACME Microscope Imaging Co provides a good service for £33 an hour then customers will be queuing out the door to use it, right up to the point when ACME goes bust. But if those customers need the service then they’ll pay a proper fee for it, because they operate in the real world and they know what things really cost. They won’t be frightened off by commercially-realistic prices for a good service that they require.

Why then, in academia, do we so often fear that charging a commercial rate – often referred to as ‘full economic cost’ – for research and consultancy services will strike terror into our industry customers? Is it because we, unlike those customers, don’t operate in the real world?

When we see full economic costs broken down they can indeed look a bit crazy at first sight. “What? The overheads account for more than half of what we’re charging them!” But overheads are simply the costs of doing business, and if they’re not fully covered then something has to give. As we saw above, either salaries have to be cut or the business quickly fails. Every business knows this. In universities, the full economic costing model is worked out and monitored by finance professionals, along similar – if somewhat more complex – lines to the above. And, say what you like about finance professionals, one thing they tend to be good at is numbers…

Of course, there’s a key difference here between industry and academia. In industry, the selling is done by the person with the bright smile and the slick-looking suit. They don’t say stuff like “well the true cost is £10k, but my bloody business insists on sticking another £10k on that to cover their overheads.” Instead, they explain why £20k is a good price for an excellent service, and then hand over to their colleagues in the lab, workshop or wherever to deliver that service. In academia though, it’s usually the individual academic who has to act as both salesperson and the person who delivers the work. And while delivering the work is generally well within our comfort zone, selling our services is often much less so.

And on a serious note

I write this at a time when the UK university sector is deep in a financial crisis that seems to have no obvious solution. The FTSE 100 index, by contrast, is close to its all-time high. I don’t dispute that businesses face their own cost and financial challenges, but I reject the notion that sound businesses will not pay a fair price for a good service. And I’m completely sure they know what ‘fair’ looks like. If you have confidence in the service you provide – no doubt you do – then have confidence that it’s worth a fair price that covers the total cost of delivering it. We do ourselves a disservice (and worse) if we sell ten-quid notes for a fiver.


The views and opinions expressed in this blog are mine alone and are in no way endorsed by my employer. Factual information and guidance are provided on a 'best-endeavour' basis and may become out of date over time. Web-links were correct at time of writing but commonly go out of date. No responsibility can be taken for any action or inaction taken or not in respect of the content of this blog.

Monday, January 6, 2025

Let’s start at… the end?

Why putting together your proposal back to front can help make a really compelling case for funding

I don’t eat my Christmas pudding before I tuck into my turkey, and I’m betting you don’t either. I never put my shoes on before pulling on my socks – how would that even work? And beginning a conversation with the word ‘goodbye’ would, quite literally, be a non-starter. But Margaret Mitchell, author of the best-selling novel Gone with the Wind, famously wrote the story backwards. In describing her approach, she said: "I had every detail clear in my mind before I sat down to the typewriter*. I believe […] that is the best way to write a book, then your characters can’t get away from you and misbehave, and do things you didn’t intend them to do in the beginning."

When it comes to writing research proposals, I think we can learn a great deal from Ms Mitchell’s approach.

Spoiler alert?

Writing a grant application is, on one level at least, an exercise in form filling. The application form is often long and rather arduous, and if you’re anything like me the temptation is generally to start at the beginning, filling in the easiest bits first. You might then try to tackle the rest of the form more or less in the same order that the sections appear, perhaps skipping some of the toughest and least-appetising sections to be re-visited at a time when you’re feeling more enthusiastic (ha ha ha). But your proposal isn’t a tax return, it’s a story – the story of why your research project matters so much that it must be funded. To succeed, it will need to be a highly compelling story. And arguably, the most important part of that story is the ending. So I’ll try to make the case that, like Margaret Mitchell and some other prominent authors, you really should start at the end and work backwards.

I should make clear at this point that this isn’t really about filling in the application form. You should only ever do that when you’ve done all the hard work of thinking through, designing and planning the research, and you have a complete and well-formed story to tell (like Margaret Mitchell, you should have every detail clear in your mind before you sit down at your typewriter*). So this is about devising that story. Once you’ve identified a specific problem that your project will tackle and given some thought as to how, in broad terms, you’re going to tackle it, here’s a slightly unusual suggestion as to what your next step should be. Draft the post-project press release.

This is not as mad as it may sound. Sure, you haven’t done the research yet, and so of course you don’t have any project results. But what you should have is a very clear idea of what success would look like. What are the implications for the problem you’ve decided to address if the project goes to plan and yields positive results? Without a clear idea of this, you’d be unable to answer the question "what’s the point of doing this project?" If you can’t answer that, then you can be certain that your reviewers won’t answer it for you.

Consider the following (re-assembled and lightly edited) extract from a 2020 press release about the results of a clinical trial undertaken by researchers at the University of Cambridge:

'Pill on a string' test to transform oesophageal cancer diagnosis

A 'pill on a string' test can identify ten times more people with Barrett’s oesophagus than the usual GP route. Barrett’s oesophagus is a condition that can lead to oesophageal cancer. Around 9,200 people are diagnosed with oesophageal cancer in the UK each year and around 7,900 sadly die. Early diagnosis is crucial to patients’ survival.

The researchers studied 13,222 participants. Over the course of a year, the odds of detecting Barrett’s were ten times higher in those offered the test with 140 cases diagnosed compared to 13 in usual care. In addition, the test diagnosed five cases of early cancer (stage 1 and 2), whereas only one case of early cancer was detected in the usual-care group.

Compared with endoscopies performed in hospital, the test causes minimal discomfort and is quick and simple.

Alongside better detection, the test means cancer patients can benefit from less severe treatment options if their cancer is caught at a much earlier stage.

The test could be a game-changer in how we diagnose and ensure more people survive oesophageal cancer. 

Pretty much everything in that press release except the results of the trial could have been written speculatively before the project began, based simply on the goals and aspirations of the researchers. And, given that it had taken them almost ten years of work to get to this stage, it’s highly likely that their preliminary results could have enabled an educated guess as to what the results might have looked like. Most importantly, the researchers would have known from the very outset exactly what they ultimately wanted to achieve. They had identified a problem (late diagnosis leading to severe treatments and high mortality rates in oesophageal cancer; and the additional problem of unpleasant endoscopic diagnosis), and set out to do something about it with a very clear idea of what they wanted to do and how they wanted to do it. They would have understood clearly what project success would look like.

But research isn't all clinical trials

Ah, but! This was a clinical trial, and while it’s great that they were able to describe such concrete and near-term patient benefit, what about earlier-stage research that’s years away from any real-world impact? What about basic discovery research that’s not designed to achieve any specific impact in the real world?

In neither case do I see any difficulty.

When it comes to early-stage research, it’s widely accepted that single projects seldom lead on their own to substantial impact. Impact is a journey that typically spans a fairly lengthy programme of research. But imagine setting out on a journey without any clear idea of your destination (gap-year Interrailing aside). Imagine how that in-car conversation might go: "Are we nearly there yet Dad?" "I have no idea, I don’t know where we’re going." There would be tears...

I don’t think that a press release for one of the earlier-stage projects in the pill-on-a-string research programme need have looked much different to the one above. I’m sure the research team had a clear vision, direction and goals from the very beginning. So the press release might simply have used some additional phrases like "This brings us a step closer to…"

Similarly, when it comes to discovery research, it’s vital to be able to articulate a clear understanding of why the work needs to be done. Here, the problem you aim to address could be a gap in scientific knowledge; but you should still have a clear vision for the downstream importance of filling that gap. What could other scientists, including those working in more impact-focused areas, do with that new knowledge? You’re not expected to anticipate every possible outcome, but I think it’s vital to at least be able to envisage some reasonably plausible outcomes and indicate that these motivate your research. (I accept that theoretical mathematicians and physicists may feel this pressure somewhat less keenly; but I maintain that those working in the life sciences should be acutely aware of it - at least if they want to secure funding for their research.) Our hypothetical press release might allude to eventual impact potential by once again using phrases like "These new insights bring us a step closer to…".

Leave out the hype

You will, I hope, note that nowhere do I advocate resorting to the kind of unjustified hyperbole that some over-eager press officers are guilty of. It’s not uncommon for academics to complain that their university’s press office has over-simplified and twisted their results to make outlandish and overblown claims regarding importance and impact. This sort of thing has no more place in a serious press release than it does in a research proposal where, as I’ve noted before, reviewers quickly tire of overused claims such as 'urgent', 'critically important' and 'transformative'. Important and impressive things, like the numbers in the Cambridge press release above, tend to speak for themselves (okay, I’ll let them have 'game-changer').

All in order

What I’m proposing then is a sequence of inter-related steps in planning and drafting a research proposal which, if followed, should result in a coherent and convincing story throughout which impact is firmly embedded (remember, UKRI research councils and most other funders, particularly medical-research charities, are strongly focused on impact). By starting with a problem, you’ll effectively be starting with impact, and working back from that to construct a strongly impact-driven project (or at least, if it’s basic science, a credibly impact-informed project):

  1. Identify the specific (and hopefully important) problem that your project will address or build towards addressing. Your would-be research project now has a point to it.
  2. At a high level, hatch a plan for using your team’s knowledge, skills and resources to tackle that problem. You now have the makings of a research project.
  3. Identify the specific way/s in which you intend that your project will address (or at least build towards addressing) the problem. This will give your project a specific aim (the main advance that you want your project to achieve) and, if different, an overarching impact aim (the ultimate real-world goal of your wider research programme). It would also inform a hypothetical written-in-advance press release.
  4. Identify the handful of concrete, measurable steps that you’ll need to complete during the course of the project to give you the best possible chance of achieving your aim. These are your research objectives.
  5. In conjunction with Steps 3 and 4, formulate any specific research questions and hypotheses linked to your aim and objectives. This puts more flesh on the bones of your project, and adds measurable specificity to your project goals.
  6. Figure out what work needs to be done to complete each of the objectives, how it will be done, what resources will be needed, who will do the work, and when. Your project now has some work packages.
  7. Think about what might not go exactly to plan, and how you’ll mitigate this – now there’s a risk table and a credible Plan B.
  8. Describe all of this, and any other details the funder asks for, in the application form. You now have a (hopefully winning) research proposal!

There’s a bit more needed of course. What makes you think your proposed approach is novel, credible and feasible? That’s where pilot work, preliminary data and an appreciation of the relevant literature come in. What will the next steps towards impact be after this project? Describing these shows you have a good understanding of the impact journey and how the current project is positioned on that journey. How can we be sure that yours is the best team to do the project? There’s always an opportunity to explain and evidence this. All of these are the key components of a good 'why fund me?' story, and they coincide with the main points that a reviewer will be looking out for.

The above steps are often recursive and iterative as you gradually refine your proposal. You might for example need to revisit your research objectives, with knock-on implications for the work programme and possibly even the scope of the project aim. But by starting with a problem, and making sure the whole project is built around addressing that problem in a specific way, you’ll have ensured that impact is embedded throughout rather than clumsily bolted on – and that no one will be in any doubt as to the point of doing the research you’re proposing.

(* Google it, Gen-Z'ers!)



Thursday, July 11, 2024

Out-whats? Navigating research-project jargon

Some of the specialist terms in research proposals that are often essential but commonly confused

I’ve previously considered the fact that, although semantically very similar, aims and objectives are definitely not the same thing in a research project. Briefly, the project aim speaks to the very point of doing the research – the difference that the project aims to make against an identified problem (which may simply be a gap in knowledge). Project objectives are the concrete steps that the project team will take to achieve that aim. They should be SMART – specific (please!), measurable (definitely!), etc.

But there are a number of other special terms that have specific meanings, and that funders expect or require us to cover in grant applications. Once again, some of these may seem very similar to each other, so there’s plenty of scope for confusion. This blog post seeks to disambiguate and explain some of the most frequently encountered – and most frequently misunderstood – project-specific terms. How can you possibly resist reading on?

Milestones vs. deliverables

Anyone who knows me knows that I love a good analogy. And I often go for something simple that we’re all familiar with, like building a house (not all of us have built a house; but most of us have lived in one). I’d suggest that there are just two or three main deliverables when building a house:

  • Architectural drawings
  • Any other design documents needed
  • The completed house (of course!)

As the name ‘deliverables’ suggests, they are commonly things that can actually be handed over: the drawings passed from architect to client (and builder); other design documents handed to whoever needs them; and the house (or at least the keys to the house) handed over to the client. There’s a degree of finality about them – “there you go, that’s all done and checked, over to you.”

While a deliverable may also coincide with and be related to a milestone (handover of the completed house to the client is undoubtedly a key project milestone), there will be other important milestones along the way. Breaking ground (digging, to you and me) is an important construction milestone, as are pouring foundations, completing brickwork, ‘topping out’ (putting the highest point in place), completing a watertight building envelope, and so on. Each milestone is a control point (in project-management parlance) indicating that an important and often fairly self-contained phase of the work has been completed, and the project is on track.

In a research project, milestones might include things like gaining ethical approval, recruitment of participants, completion of testing, analysis of data, and completion of the project-closure report. The report will itself be a deliverable, as would other things like policy reports, new software tools, new research cohorts, a new dataset, a protocol for a future clinical trial, a novel prototype and so on. Essentially, if you said at the start of the project that you would have produced something by the project end, then that’s a deliverable. Your funder should hold you and your team to account and make sure you deliver everything that you promised to.

The above examples of deliverables are all tangible – they’re things that you could physically hold, or at least look at on a computer screen. But deliverables can also be intangible, and could for example include a cohort of trained participatory researchers, or the transfer of knowledge or a specialist technique to another organisation.

Research projects that aim to achieve impact, and rely on the actions of others to achieve that impact, need to ensure that at least some of their deliverables are actionable. This simply refers to the fact that they have been designed to facilitate and support some form of next-steps action, be that a policy change or a change in behaviour. A new dataset is not necessarily immediately actionable; a policy-recommendation report or health-information video much more so.

Outputs vs. outcomes

Out-whats? Outputs and outcomes are frequently confused, but they’re neither synonymous nor interchangeable.

Outputs are the things that you will produce during the lifetime of your project. They may be tangible – things like project reports, datasets, new patient-reported outcome measures (PROMs), papers, videos, popular-science articles, patents, prototypes and so on; or intangible – such as new processes, concepts and interventions (although these are likely to be tangibly documented), new collaborations, or perhaps a newly-upskilled researcher.

Hold on, I hear you say – aren’t these the same as deliverables? They’re certainly very similar, and it’s true to say that every deliverable will be a project output. But not all outputs are necessarily deliverables. Academic papers and popular-science articles, for example, may well be expected of you, but they may not be part of the set of specific deliverables that you signed up to producing. The same might be said of a public-engagement event (an intangible output). Perhaps a new cohort was recruited, but this was simply in order to produce one or more of the pre-agreed deliverables. I concede, it’s subtle. And I’ve never seen a reviewer quibble over the difference between the two, so we probably don't need to sweat about this too much.

Outcomes, on the other hand, are different from outputs and deliverables. If an output is a means to an end, then an outcome is that end (although see also impact below).

To take an intangible example, if the output is three early-career researchers trained in the CRISPR-Cas9 gene-editing technique, then the outcome might be an up-scaling of capacity and capability in this particular aspect of research at their institution. If the output is a new research dataset, then the outcome might be that scientists in a particular field are able to conduct new analyses to test new hypotheses and answer previously unanswered research questions. If the output is a prototype for a new medical device, then the outcome might be that the research team can move to the next stage of development that would see it tested using human participants. In short, outputs tend to be things; outcomes tend to be things that happen.

Impacts

The terms outcomes and impacts are sometimes used interchangeably, but (and I feel there’s a theme emerging here) they’re not always the same. They are linked though, and impacts often flow in the longer term from outcomes. By way of an example, some population-health research in the area of breastfeeding might produce an informational video aimed at new mothers (an output). Widespread engagement with the video might result in new mothers in the target region being better informed about good breastfeeding practice (an outcome); and breastfeeding rates in that region might subsequently increase (an impact). Over the longer term, increased rates of breastfeeding might result in improved neonatal and later-life health outcomes, which of course is what really matters and was no doubt the ultimate impact aim of the research project. Again, the difference here can be quite subtle, but it’s real.

Intellectual property

Intellectual property – IP – often causes confusion among research-funding applicants, who are, after all, typically not specialists in this area (and nor am I). We commonly equate IP with things like music composition, literary authorship, new software and commercial patents. But in the world of research, the definition of IP is broad. A number of project outputs are likely to meet this definition. And so the line “we do not anticipate that any IP will be created” just won’t cut it for the ‘IP Management’ section of a research proposal.

Given that this is a glossary of sorts, here are a couple of further definitions. Background IP is IP that will be required for/used in the project, that exists already. Foreground IP – sometimes referred to as arising IP – is produced during the course of the project.

When it comes to research, there’s quite a wide range of outputs that count as (foreground) IP and will thus need to be managed appropriately. They include, for example:

  • New and updated/altered software, computer code and/or algorithms
  • Inventions, including designs and patterns (for example a new medical device), whether patentable or not
  • Other know-how, ideas and concepts
  • Databases and datasets
  • Laboratory notebooks
  • Project reports and documents
  • Protocols, standard operating procedures and processes developed during the project
  • Any branding, symbols and words/slogans developed for the project (it is not uncommon for a large research project to have its own brand presence)

Involvement (in research)

Since we seem to be going to town on the ‘I’ section of this glossary, here’s another one – involvement. As in patient and public involvement (PPI). I’d normally advise against defining something by stating what it is not, but, when it comes to research, involvement is not engagement (i.e. getting out there, meeting people and engaging with them in some way with regard to your research). And it’s definitely not participation (i.e. being a volunteer subject in a research project).

Public (and patient) involvement is about doing research with members of the public, not just to them (as participants/volunteers), about them (as aggregations of data points) or for them (as beneficiaries). By involving people, you are bringing in their experience – and their views and ideas based upon it – and helping to ensure that the research is relevant because it has been designed, conducted and disseminated in conjunction with some of the people whom it is intended to affect. Wherever it’s appropriate, public involvement in research is a good thing.

Project partners

Once again, semantics sets a trap for the unwary. Surely any third party – whether an individual or an organisation – who does something with us in a project is a project partner? By this not-unreasonable definition, our co-investigators are our project partners, as are public-involvement contributors and any other collaborators. And yet…

Although different funders may use the term ‘partner’ somewhat differently, the UKRI definition is fairly widely applied. UKRI defines a project partner as follows (my bold):

“a third party person who is not employed on the grant, or a third party organisation, who provides specific contributions either in cash or in kind, to the project. […] As a rule Project Partners are expected to provide contributions to the delivery of the project and should not therefore be seeking to claim funds from UKRI.”

Project partners are typically businesses, government organisations such as executive agencies, NHS trusts or health boards, or third-sector organisations. It’s worth noting though that the Horizon Europe scheme takes a different approach. Here, any type of organisation can receive funding for their involvement in a project so long as they have the operational and financial capacity to carry out the tasks assigned to them. All members of a Horizon Europe project consortium are referred to as consortium partners.

Including all this information in your proposal  

Some of the above, such as project partners, PPI and IP, are usually dealt with in dedicated sections of the application. When it comes to objectives, milestones, deliverables, outputs and outcomes, it’s generally good to include these in your approach and methodology section, clearly linked to the particular work package/s (or similar tranches of the project work) to which they relate. They are all important parts of the story. If the funder requires a Gantt chart or workplan (or if you decide to include one), then this should also capture these elements of the project. Your project aim and associated impact goal/s should of course be central to the proposal, and should run through it like a thread. They provide the answer to the important but potentially devastating question: “What’s the point of this project?”


The views and opinions expressed in this blog are mine alone and are in no way endorsed by my employer. Factual information and guidance are provided on a 'best-endeavour' basis and may become out of date over time. Web-links were correct at time of writing but commonly go out of date. No responsibility can be taken for any action or inaction taken or not in respect of the content of this blog.

Monday, April 29, 2024

Seven (more) Deadly Sins? Common reviewer criticisms (Part 2)

The second part of my list of the most common points of criticism from grant reviewers in the life sciences

I recently blogged about common reviewer criticisms and how to avoid them. This involved a trawl through grant reviews and panel-feedback letters, to draw up a list of recurring themes. And the list turned out to be quite long – longer in fact than could reasonably be covered in a single blog post. But given how often these criticisms crop up, it seems reasonable to draw attention to them and highlight strategies for making sure they never appear in feedback on your own proposals.

My last post on the subject offered seven commonly-seen criticisms from reviewers and grants panels. Without further ado and once again in no particular order, here are seven more:

1. “The level of project risk appears unacceptably high” / “Potential risks do not appear to have been adequately considered”

This criticism might be phrased in general terms as above, or it could focus on a specific point. Funders tend to be risk averse, and even those that claim to tolerate a higher-than-normal degree of scientific risk (i.e. high-risk, high-gain research) will balk at unnecessary project risks. We saw an important example of unacceptable project risk in the previous post – over-ambition and the associated risk that it won’t be possible to get everything done with the time and resources available.

Specific project risks can be diverse, but they’re frequently eventualities that, if they arose, could scupper the project. Perhaps it seems unlikely that you’ll be able to recruit enough participants or elicit enough responses. Maybe you’re too reliant on collaborators over whom you have little control, possibly because they’re overseas. Perhaps you’ve front-loaded the project with the riskiest work, which could bring down the whole show in year one if it fails. Maybe inter-dependencies between multiple work packages create prominent points of failure.

Avoiding this criticism

Reviewers will expect you to have considered objectively what might potentially go wrong. You should assess how likely it is to happen, quantify the severity of the impact on the project if it were to happen, and consider what you can do both to reduce the risk of it happening and to mitigate the consequences if it did. It’s common to use a red, amber, green (RAG) grading scheme for risk likelihood and severity.

Be thorough and use a bit of imagination. All too often, I see very short paragraphs on risk that seem to suggest the applicant has led a very charmed life where nothing has ever gone wrong. This isn’t about listing all of the force majeure occurrences that could potentially scupper any project – “lab burns down”, “Co-I steps under bus” etc. It involves assessing risks that are specific to the project. Make sure that mitigation measures are specific, practicable and likely to be effective.

The best way to show you’ve done all this is to include a risk table or matrix in your application. If you don’t have the space, summarise key risks and mitigations in narrative form, and indicate that you’ve prepared a full risk table which is available on request.
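To give a concrete (and entirely made-up) flavour of what such a table might contain, here’s a minimal sketch of a RAG-graded risk register. The risks, scores and banding thresholds below are all hypothetical – in a real application these are matters of careful judgement for the project team:

```python
# Minimal sketch of a RAG-graded risk register.
# Likelihood and severity are scored 1 (low) to 5 (high); their
# product is banded into Red/Amber/Green. All entries and
# thresholds here are hypothetical examples.

def rag_grade(likelihood: int, severity: int) -> str:
    """Band a likelihood x severity score into Red/Amber/Green."""
    score = likelihood * severity
    if score >= 15:
        return "Red"
    if score >= 8:
        return "Amber"
    return "Green"

risks = [
    # (description, likelihood 1-5, severity 1-5, mitigation)
    ("Participant recruitment falls short", 3, 5,
     "Recruit via three sites; monitor monthly against targets"),
    ("Overseas collaborator delays data delivery", 2, 4,
     "Agree data-sharing schedule up front; identify backup dataset"),
    ("Key equipment failure", 2, 3,
     "Service contract in place; access to equivalent facility nearby"),
]

for description, likelihood, severity, mitigation in risks:
    print(f"{rag_grade(likelihood, severity):>5}: {description} -- {mitigation}")
```

The point isn’t the code, of course – it’s that every risk gets a likelihood, a severity and a specific, practicable mitigation.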

Funders do realise that it wouldn’t be research if we knew exactly what will happen. They’re not looking for risk-free projects, but they do want reassurance that, whatever happens, some good scientific progress will be made. It’s all about showing that you’re aware of the risks, have minimised them to reasonable levels, and have a sound Plan B.

There’s more on risk analysis and management in research projects in this article – Don't let your proposal fail due to a poor risk analysis.

2. “There is little or no evidence of training and career-development activities for research staff”

You may have noticed that funders are placing more emphasis on capacity building and skills development among the research workforce. Even if a grant is not specifically training-focused (as a fellowship is), it represents a major training and career-development opportunity for all involved. Research staff in particular have an opportunity to acquire valuable new skills and experience upon which to build their future careers. The funder will expect you to maximise those opportunities, and if you don’t seem to be doing that then your proposal is likely to attract criticism.

Avoiding this criticism

Training and skills development activities can be formal or informal. When it comes to formal training, be aware of what opportunities exist for research staff within your institution, and indicate how you will support your researchers in identifying and accessing them. If a researcher will need to acquire specialist skills to undertake part of the work then make clear how this will happen, and how you and the project will support this process.

We all learn through doing, and working as a research assistant or postdoc offers valuable opportunities for experiential learning. Once again, describe how you’ll support the learning process, through things like mentoring and opportunities to take on meaningful tasks that will challenge and extend skillsets. These might include opportunities to attend and participate in conferences; write papers and grants; undertake public-engagement activities; learn specific techniques; participate in project management; and build network links. Look to provide constructive feedback, give credit when it’s deserved, and celebrate achievements collectively. Don’t underestimate the extent to which an early-career researcher or other team member could benefit from your particular knowledge, experience and insights, and actively look for ways to share your expertise throughout the lifetime of the project.

Your description of staff-training and development activities needn’t be lengthy, but it should be clear and specific. If you have to complete a justification of resources section then this can often be a good place to cover some of the specifics of staff-development activities and associated costs.

3. “Value for money is not obviously high”

The first thing to note is that the evaluator should not be complaining that the project is costly – i.e. a lot of money – per se. If the total sum requested is within the funder’s maximum limit then a meaty budget shouldn’t on its own make it uncompetitive. Reviewers are however frequently asked to comment on a proposed project’s value for money, and to do so they need to consider the cost of the project in the light of the possible benefits (including new knowledge) that would come from its completion.

In a world where financial resources aren’t infinite, if reviewers and/or grants panels feel that a research proposal doesn’t offer good value for money then it’s unlikely to be successful, regardless of its merits.

Avoiding this criticism

Justify everything in your budget – people’s time, equipment and consumables, travel and so on – explaining why you need it and how you arrived at the cost. I’ve previously written a blog post about justifying resources. Try to make sure there isn’t an obvious way of achieving the same or similar results while spending considerably less on resources.

Having satisfied the funder that they’re not over-paying for the project and its planned outcomes, you’ll need to convince them that those outcomes are worth the cost. Ideally, you’ll try to quantify the anticipated impact in financial terms. For example: “The NHS currently spends £55m annually treating 25,000 patients for this painful condition. Our proposed new care pathway would reduce the per-patient cost to less than £2,000, leading directly to savings of at least £5m per year.” If the project will cost £1.2m then this would seem to offer very good value for money.
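As a sanity check, the arithmetic behind that hypothetical example can be worked through in a few lines (all figures are the invented ones from the example above):

```python
# Sanity-check the hypothetical value-for-money example above.
current_spend = 55_000_000    # NHS spend per year (GBP)
patients = 25_000             # patients treated per year
new_cost_per_patient = 2_000  # upper bound under the new care pathway (GBP)
project_cost = 1_200_000      # proposed project budget (GBP)

current_cost_per_patient = current_spend / patients  # GBP 2,200
annual_saving = (current_cost_per_patient - new_cost_per_patient) * patients

print(f"Current cost per patient: £{current_cost_per_patient:,.0f}")
print(f"Annual saving: £{annual_saving:,.0f}")  # £5,000,000
print(f"Project cost recouped in {project_cost / annual_saving:.2f} years")
```

A £1.2m project against a recurring £5m annual saving pays for itself within a few months – exactly the sort of simple, checkable case you want the reviewer to be able to make for you.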

For various reasons, it’s not always easy to quantify impact in financial terms. Many projects won’t on their own lead directly to clinical (or other health/wellbeing) impacts, and often those impacts don’t have an easily-calculated financial value. This shouldn’t stop us from seeking to spell out, in a realistic and proportionate manner, the nature of the intended impact in such a way that it can be viewed objectively alongside project costs. In general terms, medical, health or care research that isn’t obviously excessively priced will be deemed worthwhile if it will yield a modest benefit for a large number of people; or a substantial benefit for a smaller number.

4. “Claims about the impact of the research are insufficiently backed up”

“Impact claims are unsubstantiated.” Hmmm. When we’re really excited about a research project it can be hard to resist letting our enthusiasm run away with us. And it can be easy to slip into persuasive hyperbole when it comes to describing its importance and potential.

Who hasn’t at some time described the research need as “urgent”? Referred to the need for the research as “imperative”? Or claimed that the impact of their project will be “transformational”? Do such descriptions convince the evaluators, or do they just irritate them?

Imagine you’re browsing the shelves in a cookware shop. A sales-person approaches: “Sir, this air fryer will change your life! It’s a truly amazing product, and you’d be crazy not to buy it!”

This unevidenced hyperbole won’t convince me and will probably just annoy me. I’m quite capable of making my own mind up, I’m highly sceptical about ‘life-changing’ claims for products, and like everyone else I’m firmly convinced that it’s the world that’s crazy, not me. A different approach might have been more productive: “I think air fryers’ popularity is well deserved – they use up to 80% less oil than deep-fat frying, they can reduce harmful acrylamide production by as much as 90%, and they only use around half the energy of a standard electric oven. You can check those figures and find out more about their benefits on the AllFactsAboutAirFryers.com website if you’re interested.”

I think that, like me, reviewers and grants-panel members prefer to make up their own minds, based on the facts and evidence, rather than being told what they should think. Your idea of what constitutes ‘transformational’ may after all be very different to theirs.

Avoiding this criticism

The solution here is really simple – avoid hyperbole. Think twice before peppering your proposal with adjectives, and instead let the facts and figures do the talking. Have faith in the importance and relevance of your research. If you do make a bold claim that the research is transformative, urgent or whatever, then be sure to back that claim up very convincingly with objective evidence.

I previously wrote a blog post on this subject – the inadvisability of using adjectives and hype in research proposals.

5. “Impact plans are insufficiently detailed”

Impact plans (proposed impact activities) are the things that you and your project team will do to maximise a project’s potential for impact. They’re usually vital for ensuring that your project results – perhaps just some numbers in a spreadsheet – can escape from the confines of your laptop and be translated into changes that will ultimately benefit people ‘in the real world’.

If reviewers aren’t convinced that your claims for impact are likely to come about, because they just don’t understand how your results will ever have an influence on policy, practice, procedure, products, processes or whatever, then they’re unlikely to recommend funding. Remember – impact is usually the answer to the question “what’s the point of doing this research?”

Avoiding this criticism

Whole books have been written on impact, so what follows will just be a very brief overview. Impact commonly involves some sort of change, for example a change of government policy or the adoption of a new way of doing something. If your project results and outputs are going to trigger and inform such change, there are usually two key questions that you must answer:

  1. Who are the key decision-makers who need to know about your results?
  2. What is the best way to reach those individuals and groups with your results?

It’s often useful to do a stakeholder-analysis exercise to determine who the main interested parties are and formulate a communications and engagement plan. In many cases, it’s highly advantageous to bring some key stakeholder representatives on board from the outset, so that they’re close to and involved with the research. If for example influencing the way social workers operate will be key to impactful change, then it’s probably important to include some social workers and others involved with social-services policy on a project advisory board or similar.

As in all other aspects of bid writing, specificity is paramount. “We will engage with key stakeholders through a series of workshops to ensure impact” carries no real weight at all. “We will communicate our results and best-practice recommendations directly to care home managers through an article in Care Home Management (which has agreed in-principle to publish our findings), and provide a link to an interactive online decision tool that we will publish on both the University’s and funder’s websites” does a much better job.

If your impact plans depend on producing and distributing some sort of ‘toolkit’ then be specific about what this would look like (my default understanding of a toolkit is a bag full of hammers and spanners – will it look like that?). If the plan is to produce a ‘policy report’ or similar then be realistic about the chances of the right people reading and acting on it. I have heard it suggested that the shelves of government are lined with hundreds of unopened, unsolicited policy reports sent in by academic researchers, all of which are slowly gathering dust.

UCL has published a useful Guide to Creating Impact from Research.

6. “The proposal is not truly interdisciplinary”

This criticism can sometimes be a version of Point 4 above – you’ve made a claim, in this case about interdisciplinarity, but it isn’t supported by the facts. But funders are increasingly publishing grant calls that specifically require interdisciplinary approaches, so they’re on the lookout for applications that don’t actually meet this key criterion.

Avoiding this criticism

Multi-disciplinarity is not interdisciplinarity, and a claim to the contrary will likely be met with the above criticism.

Imagine a bike-racing event that involves a 50-mile lap on the road, followed by a 30-mile lap on off-road trails. Competitors start the event on a skinny-tyred road bike, and then switch to a mountain bike for the off-road section. This is a multi-disciplinary event, involving two distinct cycling disciplines.

Now imagine an event comprising a single 80-mile circuit that intermittently takes in both lengthy tarmac sections and loose trails. Neither a mountain bike nor a road bike would be very suitable – the road bike would be a nightmare on the trails, while the mountain bike would be slow and hard work to ride on the road. Instead, a new type of bike is called for – a gravel bike. Gravel bikes share characteristics with both mountain bikes and road bikes, and are excellent for riding long distances over mixed terrain. A gravel-bike race borrows from and integrates two existing cycling disciplines and is truly interdisciplinary. Indeed, the popularity of gravel biking has seen it emerge as a new discipline in its own right.

As in cycling, so in research. A proposal that would see physicists working alongside biomedical researchers, with each completing a separate part of the project, is multi-disciplinary; but one that would involve researchers working at the interface between medical science and physics, integrating approaches from both disciplines and crossing disciplinary boundaries to prevent, diagnose or treat disease, is likely to be demonstrating an interdisciplinary approach. Biomedical physics has indeed emerged as a discipline, and involves the application of physics to medicine for diverse applications such as the study of biomolecular structure in disease states.

(Transdisciplinary research, by the way, is not synonymous with interdisciplinary research. Rather than sitting between two or more traditional disciplines it transcends them. It is holistic and sits apart from the disciplines from which it emerged, being much more than just the sum of their respective parts. The recently-emerged field of climate-change research is, for example, substantially transdisciplinary.)

To avoid receiving this particular criticism, it’s essential to understand what constitutes multi-disciplinary and interdisciplinary (and indeed transdisciplinary) research sufficiently clearly to judge which category your project falls into. If you still feel unsure about how the categories differ then this nice diagram may be helpful.

7. “The application is poorly written” / “The proposal shows a lack of attention to detail”

I don’t suppose that most reviewers consider reading proposals to be an unbridled joy. People volunteer for peer-review colleges and grants panels for a number of different reasons, but I suspect that pure pleasure isn’t usually one of them. So we should see it as our duty to take every possible step to prevent the whole experience from being any more unpleasant than necessary.

Spelling mistakes, typographical errors, poor grammar, inconsistencies, strange punctuation and downright sloppiness will all impede the reader’s attempt to understand and appraise your proposal. Even though reviewers are never asked to evaluate the typographical quality of proposals, they’re very often unable to resist showing their frustration in their reviews. “There are spelling mistakes and other typographical errors, suggesting a lack of care on the part of the applicant”. The implication here of course is that someone who can’t devote due care and attention to their proposal might not be inclined to lavish much effort on delivering the project.

Avoiding this criticism

A little bit of me dies each time I see this criticism in a review. Regardless of how excellent, novel and relevant the research idea, the applicant has succeeded in irritating the reviewer to the extent that they are – in part at least – negatively disposed towards it. And it’s so easily avoided!

Leave enough time at the end of the writing process to read through everything properly and in its entirety, at least once. This should ideally be done with fresh eyes. Ask others to look through and proof-read drafts too. Be sure that all of the proof-reading tools in your word-processing software are switched on and properly configured, and use the spell-checker (and grammar-checker) tool liberally. Pay attention to consistency – if you used the UK English version of a word in one place, don’t then switch to the US version elsewhere; if you gave a word a capital letter in one place then do so throughout. Spell out acronyms and abbreviations in full on first use. Try to keep punctuation simple and unobtrusive. Aim to submit a ‘pretty’ document with good and consistent use of white space, consistent font type and size, helpful subheadings, and so on. While you should view the idea of letting ChatGPT and other generative AI loose on your research proposals with caution, AI tools are increasingly good at proof-reading and tidying up sloppy text if they’re properly supervised. Do check their output carefully though – don’t let ChatGPT be the last thing to edit your proposal.

The best of the rest (the rest of the worst?)

There are other criticisms that crop up quite regularly but didn’t make the cut here. ‘Plans for participant recruitment seem unrealistic’; ‘expertise required to complete the project appears to be missing’; ‘the applicant appears to have ignored [something of relevance] in the literature’; ‘there is a lack of clarity or missing information in describing [whatever]’. Happily, they’re all readily avoidable through legwork, attention to detail and thorough project planning.

If you think you recognise your own panel rejection letter in the above list, please be assured that it is a composite of the main recurring themes taken from a large number of different proposal reviews.

Monday, March 18, 2024

Grant applications: Common reviewer criticisms (and how to avoid them)

Some of the most common points of criticism that I see raised time and again by grants evaluators in the life sciences

Reviewers and panel members’ criticisms fall broadly into two categories: those that are linked to highly-specific aspects of the science, approach and methodology; and those that highlight a more general flaw in the proposal. By definition, the former tend to be one-offs, more or less unique to a particular grant application. But the latter are, in my experience, frequently identifiable as recurring themes that come up with frustrating regularity.

I can’t offer you any words of wisdom on avoiding specific scientific and methodological criticisms, beyond urging you to do your best to design your project so as to avoid criticisms like the following: “It is apparent that no data will be collected if an asteroid strike doesn’t occur during the period of the study”. (Yes, I made that one up.) But, given how often I see the same sorts of general criticism crop up, I think it should be reasonably easy to prepare a checklist of common bear traps, and offer some thoughts on how to avoid them. So, in no particular order:

1. “The study seems (over-)ambitious”


When the word ‘ambitious’ appears anywhere in a proposal review or panel feedback it’s a huge red flag. Although the word can be laudatory in other contexts, it’s usually a killer blow in a review – along similar lines to remarks like “that dress is a really brave colour!” Ouch. Sometimes it’s used somewhat euphemistically – “the proposed study appears highly ambitious”. Other times, the gloves are well and truly off – “the project is over-ambitious”.

Accusations of over-ambition might relate to the timeframe, the (insufficient) amount of funding requested, the size and experience of the team, and the availability of resources – often several of these things. Early-career researchers in particular seem prone to proposing projects that get shot down for being over-ambitious by seasoned reviewers who know how much – or indeed how little – can actually be achieved in the course of a three-year project.

Avoiding this criticism

No one wants to submit a project proposal that’s obviously under-ambitious. So the answer here definitely isn’t to propose doing one year’s worth of research over a three-year period.

Part of knowing how to avoid this criticism comes with experience. An experienced academic who’s previously delivered a number of substantial research projects is reasonably unlikely to suddenly put together a proposal that’s wildly unrealistic. If you don’t yet have that experience yourself then try to borrow it. Ask a seasoned colleague (such as your mentor, supervisor, group lead or perhaps a senior-level collaborator) to look at your proposal specifically through the lens of whether it’s realistic and feasible. For your own part, try to break down the project with a degree of granularity into tasks and sub-tasks. Think carefully about who will do each task, what they’ll need in order to do it, how long it should reasonably take, and how much contingency might be sensible to build in. Once you’ve done this it should be easy to prepare a Gantt chart to illustrate your project timelines – a visual representation of your key steps, their scheduling and the time needed for each. Ask yourself what could derail your plans, how likely this is to happen, and how that risk could be minimised. As a general rule, long lists of objectives will terrify reviewers. Three main scientific objectives is often plenty for a three-year research project.

When it comes to budget, the overambition-related killer blow often looks like the following: “The proposed study cannot be completed with the requested resources/funding”. This should be quite simple to avoid. When you do your granular-breakdown exercise, work out exactly what will be needed for each task in terms of human resources, equipment (including access charges) and consumables; get reasonably-accurate costs for everything; and be sure to justify everything in your budget narrative (justification of resources – more on this below). Leave no one in any doubt that you very obviously need everything that you’ve included in the budget – no less, but definitely no more.
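By way of illustration only, the granular costing exercise can be as simple as a task-by-task tally. Every task, rate and figure below is invented, but building the budget up from the bottom like this is the point:

```python
# Hypothetical bottom-up costing sketch: staff time plus non-staff
# costs per task, with an explicit contingency line. All figures
# are invented for illustration.
tasks = [
    # (task, staff days, day rate GBP, consumables/equipment GBP)
    ("WP1: Sample collection",        40, 350,  5_000),
    ("WP2: Lab analysis",             90, 350, 22_000),
    ("WP3: Data analysis & write-up", 60, 400,  1_000),
]

staff_cost = sum(days * rate for _, days, rate, _ in tasks)
non_staff_cost = sum(other for *_, other in tasks)
subtotal = staff_cost + non_staff_cost
contingency = round(subtotal * 0.05)  # 5% contingency (illustrative)

print(f"Staff:       £{staff_cost:,}")
print(f"Non-staff:   £{non_staff_cost:,}")
print(f"Contingency: £{contingency:,}")
print(f"Total:       £{subtotal + contingency:,}")
```

Arriving at the total this way means every line in the budget narrative already has its justification attached.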

2. “It is unclear who will be doing what in the project”

This criticism comes in various different guises – “certain team members’ inclusion is not justified”, “the roles of individual team members are not clearly explained” and so on. The reviewer is basically thinking “why on earth is this person named on the grant?” They may also be suspecting that someone has been named because they’re a high-profile ‘big beast’, or perhaps because you owed them a favour.

Avoiding this criticism

This should be really easy to avoid. Firstly, only include people on the grant if you genuinely need to do so. Perhaps you need them to carry out specific tasks, or to lead a particular work package, because they have specific expertise and access to infrastructure or resources. Maybe you need them to bring their expertise to the table in an advisory capacity.

Secondly, explain clearly what they will be doing in the project. Your explanation doesn’t have to be lengthy, but it should be specific. Do make sure that their time commitment on the project is sufficient for them to get everything that they’ve signed up to done.

Part of justifying an individual’s inclusion on a grant is to make clear that they have the specific skills and experience needed. CVs help in this respect, and narrative-style CVs are perfect. But it’s often wise to cover this briefly in your proposal too – for example in a work package description (“Dr X will lead this work package, where she will use her expertise in LC-MS for the detection of novel psychoactive substances in blood samples”).

3. “PI/Co-I time commitment is not sufficiently justified” / “PI/Co-I time commitment seems insufficient”

This is closely related to both of the above criticisms. An evaluator is worrying either that it’s not clear why someone needs to be costed into the grant for however many hours per week; or that it seems very (too) ambitious for the person or people to complete all of their project tasks in the time allowed.

Avoiding this criticism

Again, this should be easy to avoid. In fact, if you’ve taken steps to avoid the above two criticisms (scale of ambition and division of tasks) then it should never even crop up. Work out carefully who will be doing what, and how long it should take them; ensure that your time allocations and budget properly account for this; and describe and explain it all clearly in your proposal.

4. “Costs and resources are insufficiently justified”

Here we go again. You’re asking for something in your budget, but it’s not at all clear why. Why do you need all those people on the team, and how was each person’s time commitment worked out? Why do you need that bit of kit? How did you arrive at that travel figure – and why do all those people have to go? Indeed, why go at all?

Avoiding this criticism


Explain and justify everything – don’t just list it! Give sufficient breakdown where necessary rather than just including high-level totals, and explain and justify your choices (e.g. choice of conference) and numbers (e.g. number of attendees). You’ll often be given a whole section of the proposal in which to explain your requests for resources, and the clue here is very much in the name: Justification of resources. So do be sure to justify them.

I’ve previously written a whole blog post on this subject, so rather than repeat myself at length here I’ll simply provide a link.

5. “Sample size is not justified” / “The sample size seems insufficient” / “The study appears inadequately powered”

You said that you were going to include X number of subjects/participants/samples in your study – but how did you arrive at that number? Why shouldn’t it be 10 more, and why couldn’t it be 10 less?

I see this criticism a lot. Reviewers and panels seem very fixated on sample size and power calculations, and not without reason. Every extra participant or sample that you include is likely to come at a cost to the project budget; but if you fail to include enough of them then your data, results and conclusions are likely to be incomplete, unreliable, misleading or otherwise meaningless. In short, sample-size calculation can make or break a research project.

Avoiding this criticism

To the uninitiated, sample-size calculation can be something of a minefield. It’s not uncommon to see concerns over sample size raised by reviewers, only for the panel subsequently to complain that the applicant has failed to fully address these concerns in their rebuttal. You could fit everything I know about sample-size and power calculations into one word – nothing (I’m a bid writer, not a statistician). So the best advice I can give is to find a good statistician working in your field of research (e.g. a biostatistician or trial statistician) and seek their sage advice. Depending on the nature of your project it may be appropriate to include them on the grant as a collaborator or co-investigator. If you’re not going to include them then remember the old adage – ‘what’s in it for me?’ At the very least you’re going to owe them a favour.

If you want to find out more about sample-size calculation, so as to reduce your reliance on others, then plenty has been written on the subject. See for example this paper on sample-size calculation in medical studies.
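Purely to give a flavour of what’s involved, here’s the textbook normal-approximation formula for comparing two group means, sketched in code. The inputs are invented, and (as above) a proper statistician should be consulted for any real study:

```python
# Illustrative sample-size sketch for a two-group comparison of means,
# using the standard normal-approximation formula:
#   n per group = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2
# All inputs below are invented for illustration.
import math
from statistics import NormalDist

def n_per_group(delta: float, sigma: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)

# Detecting a 5-unit difference where the SD is 10 (effect size 0.5):
print(n_per_group(delta=5, sigma=10))  # ~63 per group
```

Even this simplest case shows why the numbers matter: halve the detectable difference and the required sample size roughly quadruples, with the budget following suit.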

6. “PPI activity is insufficiently developed/described” / “There is little evidence of genuine co-creation”

PPI – patient and public involvement – is increasingly expected by funders in the life sciences, even for basic biomedical research. They want it to be meaningful, not tokenistic, and they’re (rightly) emphatic that involvement is not synonymous with engagement (and it’s certainly not the same as being a participant). Telling people about your research is important, but wherever possible and appropriate you should look to involve them. While no one is expecting you to get members of the public to do your lab experiments, you should always remember that, for research designed to affect people’s lives, it’s better to do things with people rather than just to them (or for them).


Ideally, if PPI is appropriate to your research then you’ll have started to do it well in advance of any grant application. This will help you set your research agenda and define key research questions, and it will inform and shape the development of your proposal. If you can indicate in your proposal that your project demonstrably reflects the needs and priorities of the groups of people it has been designed to benefit, then you’ll be well on your way to convincing reviewers that it warrants funding. Your project itself should, if appropriate, incorporate ongoing PPI, through which your patient/public team members can help to steer the project’s direction and keep it relevant; contribute to key decisions; and interpret and disseminate findings. A genuinely co-produced project will include PPI partners as equals when it comes to things like agenda setting and decision making. You can generally request budget for PPI activities where they’re appropriate.

Conversely, if you fail to include meaningful PPI when you should then your reviewers won’t be impressed. At the very least they’ll want to see PPI activities within the project, and they’ll often expect you to demonstrate how PPI helped to shape the project. They’ll be looking for a clear and specific description of PPI activities that you have undertaken to date and/or planned for the project.

Avoiding this criticism

The answer here is straightforward to the point of flippancy: include PPI partners in the co-development of your project wherever possible; organise and plan some specific and meaningful PPI activities whenever appropriate; and describe all of this very clearly in your proposal.

PPI can sometimes seem challenging, particularly in disciplines like lab-based pre-clinical research where opportunities for meaningful involvement are not always obvious. And for anyone seeking to do pre-project PPI activities, it can sometimes be a challenge to find the financial resources needed to cover the costs involved. There are some good resources and sources of support for PPI on the NIHR and Health and Care Research Wales websites.

7. “The PI has no prior experience of managing a project on this scale” / “The lead applicant has not previously led a large grant”

There’s clear potential for a chicken-and-egg situation here. If previously having led a large grant was a prerequisite for being awarded a large grant, then no one would ever get one. The reality is a bit more nuanced. Funders tend to be risk averse when it comes to handing out their money, and giving a million or two to someone whose previous largest grant was £50k represents a clear and unacceptable risk to most. If you apply for very substantial funding that’s an order of magnitude or so greater than anything you’ve been awarded before, the answer’s quite likely to be no.

Avoiding this criticism

The key here is to show stepwise progression, where the award of this large grant would be a logical next step in terms of both the applicant’s career progression and their research programme. If you have previously delivered two or three £300k–£500k grants, for example, and are now applying for £1.2 million to host a research centre in the same field, then your position as lead applicant should be pretty credible. Be sure to explain in the proposal narrative why your relevant track record equips you to lead this latest major proposal.

If you just don’t have the track record, then realistically the odds are probably stacked against you. There’s no quick way around this – it obviously takes time to build that track record. Perhaps the only quick fix would be to find someone else with more experience to lead the grant, with you as a substantive co-applicant.

Seven textbook criticisms – is that the lot?

To write this blog post I drew up a list, and it ended up being rather a long one. It turns out that there are quite a few textbook errors that people commonly make in grant applications, which result in textbook criticisms from reviewers: excessive risk with insufficient mitigation; unsubstantiated impact claims; even bad writing, sloppiness and typos. The list goes on – long enough, in fact, to fill another blog post. So watch this space.

What to do if your proposal receives these criticisms

In some cases you’ll get a second chance to put things right. Where reviewers have raised a criticism and you have a chance to respond to their reviews, be sure to grasp the opportunity in full. Give the panel the extra information they’ll need, and/or clarify whatever it was that you failed to explain clearly the first time around. If necessary, act on the reviewers’ advice to make key changes to your proposal – for example to reduce the level of (over-)ambition. I’ve previously blogged about writing a PI response to reviews – here’s the link.

Failing that, there’s little more you can do than just take it on the chin, and learn from the experience – and resolve to make sure that you never receive that particular criticism again.


The views and opinions expressed in this blog are mine alone and are in no way endorsed by my employer. Factual information and guidance are provided on a 'best-endeavour' basis and may become out of date over time. Web-links were correct at time of writing but commonly go out of date. No responsibility can be taken for any action or inaction taken or not in respect of the content of this blog. If you think you recognise your own panel rejection letter in the above list, please be assured that it is a composite of the main recurring themes taken from a large number of different proposal reviews