Friday, March 17, 2023

Can ChatGPT write my research proposal?

Disclaimer: This blog post was written at a time when ChatGPT was still very new and exciting, and the world was still getting to grips with its capabilities and their implications. There will come a time – probably quite soon – when everything written below will feel as dated as an article about the miracle of powered flight.


Unless you’ve been living in a cave for the last few months, you’re likely to have come across ChatGPT. Built on a fairly* ground-breaking conversational artificial intelligence (AI) system, ChatGPT uses a large language model (a probability-based deep-learning algorithm that determines how likely a given word is to follow – or to come before – another in a particular sequence), trained on a huge body of existing text, to produce written responses in an interactive dialogue format. This essentially makes it an AI chatbot, albeit a pretty smart and very capable one. It can write stories, put together a decent-ish undergraduate student essay, compose music and even write and debug computer code. Because it was trained on a large chunk of the internet, including articles, books and English-language Wikipedia, it ‘memorised’ a huge volume of facts during its training and so has an impressive ‘general knowledge’.

Fairly predictably, the sudden appearance of ChatGPT – and the dawning inevitability that its offspring and successors will be even more impressive – has been met with reactions that range from wonderment to dismay. And, while some people who work in content-creation roles are looking anxiously over their shoulders and wondering how long it will be until ChatGPT or something similar pinches their job, others are looking to the technology with a view to making their lives more productive and perhaps a bit easier.

Leaving aside the blatant cheating exemplified by the hung-over social sciences student** who needs an essay on the political views of Karl Marx by tomorrow morning, is AI in general and ChatGPT in particular a tool that we can legitimately use in academia to increase our productivity and improve the quality of our output? More specifically, can we use ChatGPT to write research proposals?

I think the answer is yes. And no. To explain: Do I think that ChatGPT can write your research proposal for you? No, I don’t. But do I think ChatGPT is a useful tool that could be employed as part of the grant-writing process? I do.

So how does it perform?

There’s no doubt that ChatGPT has a basic 'understanding' of what some of the main constituent parts of a research proposal are. When I asked it to “write a research proposal about developing a general vaccine against cancer”, it gave me a title ("Development of a General Vaccine Against Cancer: A Proposal for Research" – okay, not particularly inspired); a neat little introduction that told me a bit about what cancer is, the limitations of current treatments, and the general benefits of vaccines; an objective (which, if I’m quibbling, I’d say looked more like an aim); some very high-level methodology; and a kind of impact summary (as in, "a vaccine for cancer would be great!" – it’s admittedly not too much of a challenge to describe the would-be benefits of a cancer vaccine). In other similar exercises, it came up with some research questions and a list of objectives that actually looked like research objectives. The proposal itself was very short, not specific to any funder or scheme, and wouldn’t have been funded in a million years. But what did I expect? After all, I gave it very little to go on, and I asked it to come up with a research project to achieve something that’s not currently possible.

Being realistic

An eighteenth-century cookbook is reputed to have included a recipe for a pie that began “first catch your hare”. This, I think, characterises the scale of the challenge facing the research-proposal writer, be they a human or an AI. I’ve always maintained, not very controversially I like to think, that there are two key parts to producing a research proposal: designing and planning the proposed project; and writing about it in such a way as to sell it to a funder. They should never be tackled concurrently, and it goes without saying that the latter is only possible once the former has been completed. And, actually, it’s the former – designing and planning the project – that’s the really tricky bit. It requires pre-existing expert subject knowledge; original and insightful thinking; the skill of ideation and no small measure of inspiration; the ability to look beyond the state of the art to develop novel approaches; an appreciation of what’s possible and of what’s available in terms of resources; the ability to marshal those resources and assemble and co-ordinate a project team; and, on top of all that, an in-depth knowledge of the specific funder’s remit and priorities and an understanding of how the particular funding scheme is positioned.

All of this represents the substantial hare-catching exercise that must be undertaken before the proposal can be written. And it’s quite a challenge, even for a seasoned professor let alone a language-model chatbot. The best proposals that I see tend to be the ones where a whole load of behind-the-scenes work in terms of designing and planning the project has been done before a single word of the proposal was ever written. They’re great proposals not necessarily because the applicants are great writers, but because they are strategic thinkers who are capable of developing great ideas, and are really good at assembling a project and team around those ideas. Once all of that’s done – and I mean really done, with everything down to the last detail properly planned out – then it shouldn’t actually be too hard a job to describe it all in a research proposal. Sure, there’s a selling job to be done, an angle to be found and communicated, but that’s actually quite formulaic. And that may be where ChatGPT comes in.

At this point, it’s important to remind ourselves of the old computing acronym GIGO – garbage in, garbage out. If we just feed ChatGPT with garbage – be that unrealistic expectations of an anti-cancer vaccine, vagueness and hugely insufficient detail, or a combination of both – then we’re not going to get very much of use back out of it. We can’t realistically expect it to undertake a large and complex writing exercise on the basis of a single, information-poor prompt, and get it anywhere near right first time. Moreover, we can’t expect it to have great and original ideas on highly complex topics on our behalf. (It may be possible to take an algorithmic approach to generating genuinely competitive new proposals based on previous grants and some very specific parameters fed in by a human operator, but I’m still not sure that could really be described as original thinking.) This point is important. When I asked ChatGPT to write a cancer-vaccine proposal, it proposed, as part of the approach, identifying tumour-associated antigens and using advanced genomic and proteomic techniques. I didn’t include any of these words in the short instruction that I provided, so on the face of it ChatGPT would seem to have a combination of a vast amount of specialist knowledge and an ability to think and thereby apply that knowledge in an intelligent way. It has indeed assimilated a large amount of knowledge (although it doesn’t currently have real-time access to the internet), but we need to remember that it has simply learned how to string words together in a way that resembles how the people who wrote its training texts typically put those words together themselves. So the word ‘antigen’ commonly appears in texts about vaccines, and words like ‘genomic’ and ‘proteomic’ often appear alongside phrases like ‘cancer research’. ChatGPT is putting them all together in a way that appears uncannily like it’s intelligently and creatively generating research ideas of its own, but it’s not engaging in original thinking.
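To make this word-stringing idea a little more concrete, here’s a deliberately over-simplified sketch in Python. It estimates next-word probabilities from simple bigram counts over a tiny made-up corpus; real systems like ChatGPT use deep neural networks trained on vast corpora rather than raw counts, and the corpus, function name and numbers here are purely illustrative.

```python
from collections import Counter, defaultdict

# A toy 'training corpus' (illustrative only) about cancer-vaccine research.
corpus = (
    "cancer vaccine research uses tumour antigens . "
    "vaccine research targets antigens . "
    "cancer research uses genomic techniques ."
).split()

# Count how often each word follows each preceding word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Relative frequency of each word observed directly after `word`."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# 'uses' follows 'research' twice in the corpus, 'targets' once.
print(next_word_probs("research"))
```

Because ‘uses’ follows ‘research’ twice in the toy corpus and ‘targets’ once, the model assigns them probabilities of roughly 0.67 and 0.33. It has no idea what any of the words mean – only how often they tend to appear together.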

Nevertheless, what we can do is provide ChatGPT with richer input, and work with it iteratively to develop and refine its output. We’re moving away here from full ‘sit-back-and-relax-while-ChatGPT-writes-it-for-you’ automation, but I think there’s still real potential for the technology to provide us with a useful tool. Your hedge trimmer won’t cut your hedge on its own, but it sure does come in handy when you need to tackle the privet topiary – so long as you know how to use it (and how to use it safely).

ChatGPT as a tool, in the hands of the (human) user  

Firstly, it’s fair to say that ChatGPT is a good editor. If you ever struggle to write concise, clear and compelling text that’s easy to read and puts across your ideas succinctly, then consider running chunks of it through ChatGPT. I’m quite often faced with proposals that contain some very lengthy sentences, where the writer sometimes appears to have got a bit lost mid-way through and descended into confused and confusing narrative that’s really hard to follow. Such proposals are often riddled with inconsistencies, strange grammar that not only irritates the grammar-pedants but also compromises readability, and minor typos that smack of a job half done. It’s always difficult to be really critical of one’s own written output – after all, you know what you meant to say – and it takes a lot of time for someone else to tidy it up. But ChatGPT writes good, clear and well-ordered English, and it can tidy your text up almost instantly. This might be particularly useful if you’re not a first-language English speaker. I took a paragraph from a previous (unsuccessful) proposal written some years ago by just such an applicant, and asked ChatGPT to edit it for readability. The result really was much improved. So consider putting bits of your proposal (perhaps a paragraph, or a few paragraphs, at a time) through ChatGPT with the instruction to edit them for readability or simply to improve them. Bear in mind that the system is (by the designers’ own admission) by no means infallible and could make mistakes of omission, deletion, misinterpretation, misrepresentation and so on. So do check all of its output carefully to make sure you’re happy with it in every way. Just the same as how you’d check whatever came back from any co-applicant, peer reviewer or copy-editor before incorporating it into the latest draft of your proposal.

Secondly, many of us are susceptible to the tyranny of the blank sheet of paper. Getting started on writing anything can feel daunting, and it’s often important just to get something down, even if it bears little or no resemblance to what will eventually become your final draft. With the right instruction, ChatGPT could write you a decent introduction to the topic for your background section. You’ll want to check that any facts and figures included are authoritative and up to date, and you may need to develop the focus in a particular direction (or get ChatGPT to do this). But it will give you a starting point, and something to work with. And even if you hate it, seeing a concrete example of what you don’t want can sometimes help you to identify what you do want.

For example, I asked (told? I try to be polite…) ChatGPT to “write an introduction for a UK research proposal about bowel cancer”. It spat out a few paragraphs, the first of which gave some useful general background. I continued the conversation and asked it to “expand the first paragraph in the above” – and hey presto, it did. Nothing to set the world on fire, but a decently-structured introduction with some useful facts and figures. Don’t expect it to show much in the way of insight as to what a particular funder or funding scheme will be looking for. I asked it to optimise the text it had produced for the UK’s Medical Research Council and it had a go, but with no obvious improvement in that respect.

A good title, sometimes with an accompanying acronym, is a small but essential part of a compelling proposal. And here again ChatGPT can help out. When given a research-project summary and asked to come up with a title, it did so. The result was a bit wordy, but then you can always ask it to produce a shorter one. Asked for an acronym it again came up with the goods – a bit naff at its first attempt, but better when asked to try again. An iterative ‘conversation’ with ChatGPT quickly refined the quality and suitability of its outputs. At the very least, you could use ChatGPT to generate some ideas for a title and acronym.

As noted, ChatGPT doesn’t do original thinking. But summaries don’t require original thinking, just the ability to condense the key points of some text down into a shorter form. ChatGPT seems pretty good at this. Give it the text you want to summarise, and a word-count limit, and see what you get back. If necessary, ask it to edit to make the summary more accessible to a lay reader. If it has de-emphasised or ignored key points then, once again, use further instructions to iteratively refine and improve. As with pretty much everything you get ChatGPT to write for you, treat the output as no more than a working draft and, at the very least, read it carefully before pasting it into the grant-application form. As well as checking for errors and omissions, keep an eye out for things that look like they’re out of date, and also for generic ‘fluff’ and waffle that’s more or less devoid of any meaning (many humans are guilty of writing this stuff too!). You’ll almost certainly get the best results by working collaboratively with ChatGPT and complementing its output with your own additions, edits, revisions and enhancements.

A vital part of any research proposal is a list of specific research objectives. Reviewers will expect them to be relevant to the project’s aim, achievable within the time-frame of the project, and readily measurable. Defining the project objectives can sometimes seem quite challenging. I found that ChatGPT was able to produce a list of mostly fairly credible-seeming research objectives when I fed it some text about the proposed approach and methodology. This is working backwards, of course – methodology should be designed with reference to the research objectives – but it may serve as a useful sanity check, ensuring that the methodology and objectives are relevant and appropriate, with a clear and logical mapping between these two key aspects of the proposal.

Is it cheating?


And so on. There will be ways of using ChatGPT as a tool to improve your proposals that I haven’t even thought of yet. But is all of this in some way unethical and immoral – cheating? Thinking back to our hung-over student, the essay is a test of their ability to find and gather relevant facts and information, marshal their thoughts, and produce some structured, thematic text that holds everything together in a logical manner. They will be marked and graded on their ability to do this. If someone or something has substantially done this for them, then the grade awarded won’t reflect their own abilities and will thus have been fraudulently obtained. In contrast, undergraduate students are not normally being tested and assessed on their ability to spell, so no one will object to their having used a spell-checker tool.

A research proposal, on the other hand, is fundamentally a marketing document for a research project. It will be graded by reviewers and panel members, but not of course with a view to awarding the applicant (or not) with some sort of qualification. It is not the piece of writing that’s being graded, but rather the importance, novelty, feasibility and overall excellence of the proposed research. Sure, a badly-written proposal may result in a poor score – it’s well-nigh impossible to review something that’s very difficult to follow and understand, and the benefit of the doubt is seldom given. But panels will also give low scores to well-written proposals that are simply lacking in novelty, for example, or that seem to be methodologically unfeasible. And of course the grading exercise here is simply a means of prioritising the best, most important and most relevant research ideas for funding. Funders just want to commission the research that’s most likely to support their own remit, vision, and strategy, and I don’t think they’re really too fussed about exactly who wrote the proposal and how***. So, provided the funder hasn’t updated their guidance to include a ban on using AI tools, then it’s surely in no way cheating.  

Me … or ChatGPT?

You might legitimately be wondering whether I actually wrote this blog post myself, or whether I had ChatGPT write it for me. And the thought certainly did occur. I typed “Write a blog post about using ChatGPT as a tool to write research grant applications” into the text-input box, and it spat out around 450 fairly anodyne words setting the scene, describing itself, and carefully caveating the piece by reminding me that it’s just a tool, that needs to be used judiciously with proper human supervision and input. But it did describe some concrete ways in which it could be of use to the grant writer – specifically by helping to draft a compelling introduction, by producing a rough draft of sections to be refined, and by improving the style, grammar and clarity of something that’s already written while providing feedback on how it has done this. In the interests of openness, I’ve included the complete text of what ChatGPT produced at the end of this post. Take a look and see what you think.

* I say ‘fairly’, because ChatGPT didn’t appear out of nowhere – it has predecessors and ‘older cousins’ that are also very impressive. But ChatGPT is arguably the first AI of its kind and on its scale to be unleashed on the public at large as an interactive AI tool that is, transparently, just an AI tool to play with.

** Other examples of cheating in different academic disciplines are available.

*** So long as, at its core, it's essentially the intellectual work of the applicant team. The partial exception here is fellowship proposals. Here, the individual is being assessed as well as their research project – but once again they’re not being assessed on their ability to write but rather on their ability to formulate a research idea and plan a project around it. So while the project and the ideas behind it must demonstrably be their own, I don’t think many grants panels would object to some support being obtained – from whatever source – with the process of writing about them.


Here’s ChatGPT’s attempt at this blog post. I asked it to write a blog post based on the subject that I gave it, and then I gave it back the text of its blog post and asked it to come up with a title for it. To be fair, I didn't spend very much time on the iterative-refinement process that I advocate above. It all seems fine to me, if a bit vanilla. I don’t disagree with any of it.

Streamlining Research Grant Applications: How ChatGPT Can Help Researchers Write Stronger Proposals

As a researcher, one of the most important skills you will need to develop is the ability to write a strong research grant application. A research grant application is your opportunity to convince funders that your research is important and worthy of support. It is a challenging task that requires careful planning, writing, and editing. Fortunately, there are tools available that can help make this process easier and more efficient. One such tool is ChatGPT.

ChatGPT is a large language model developed by OpenAI. It has been trained on a vast amount of text data, making it capable of generating natural language responses to a wide range of prompts. This makes ChatGPT an ideal tool for researchers who need help with writing research grant applications.

One of the most useful features of ChatGPT is its ability to generate text based on a specific prompt or topic. For example, if you are struggling to come up with a compelling introduction for your grant application, you can input a brief summary of your research topic into ChatGPT and ask it to generate an introduction for you. The resulting text can serve as a starting point for your own writing or as inspiration for further development of your ideas.

In addition to generating text, ChatGPT can also provide feedback on your writing. By inputting your grant application text into the tool, you can receive suggestions on how to improve your writing style, grammar, and clarity. This feedback can be invaluable in helping you craft a polished and effective grant application.

Another benefit of using ChatGPT is that it can help you save time. Writing a research grant application is a time-consuming process, and using ChatGPT to generate text and provide feedback can help streamline the process. By using the tool to generate a rough draft of your application, you can focus your time and energy on refining and improving your ideas.

Of course, it is important to note that ChatGPT is not a substitute for careful research and thoughtful writing. It is merely a tool that can assist you in the writing process. As a researcher, it is still your responsibility to conduct rigorous research, develop a clear and compelling argument, and present your ideas in a well-organized and coherent manner.

In conclusion, ChatGPT can be a valuable tool for researchers who are writing research grant applications. By generating text, providing feedback, and saving time, ChatGPT can help researchers produce high-quality grant applications that have a better chance of receiving funding. However, it is important to remember that ChatGPT is only a tool and should be used in conjunction with careful research and thoughtful writing. 


The views and opinions expressed in this blog are mine alone and are in no way endorsed by my employer. Factual information and guidance are provided on a 'best-endeavour' basis and may become out of date over time. Web-links were correct at time of writing but commonly go out of date. ChatGPT/OpenAI logos and branding remain the intellectual property of their owners. No responsibility can be taken for any action or inaction taken or not in respect of the content of this blog. 

Tuesday, February 7, 2023

A clear line of sight?

Note: Thanks to a lost password, I am locked out of my old blog and have therefore 're-built' the blog on this new account, where I am now periodically adding new posts again

A clear line-of-sight to therapeutic use will be required to secure funding.

‘Clear line of sight’ is a phrase that UK Research and Innovation (UKRI) has started to use in some of its themed funding calls, to describe how proposals should be positioned relative to eventual real-world impact. And it’s a phrase I rather like. Since seemingly forever, the UKRI research councils have required grant applicants to describe the anticipated impact from their proposed projects. And sometimes this can feel like a surprisingly tricky job, even in the life sciences where (eventual) real-world impact is nearly always the ultimate point of the project.

Why so tricky? Because it's often the case that a single research project, taken alone, won't lead directly to very much in the way of such impact. And thus the temptation can arise to over-promise and describe impacts that, realistically, are not actually going to happen as a direct result of the project. At the lower end of the technology readiness level (TRL) scale (or of the discovery-translation scale that’s sometimes used in the life sciences), much or indeed all of the meaningful direct impact of a project is often going to be academic impact. And, for some, this may raise the question – is academic impact in some way inferior to real-world impact? Does it count for less with a funder whose remit is to bring about tangible benefits for the population at large, and whose budget is far from bottomless?

So what's the point of the research?

Let’s think about both of these related questions for a moment - the point of doing the research and the value of academic impact. Imagine standing on a busy high street and polling passers-by, asking them to rank the importance and value of two hypothetical new scientific discoveries:

  1. That a singularity can in fact be observed from the rest of spacetime
  2. That a vaccine can be produced to provide general protection against cancer

I suspect that the overwhelming majority would plump for number two. (If it sounds far-fetched, well, yes it’s certainly proving a challenge; but they’re working on it, and artificial intelligence might just hold the keys to cracking it.) Even if we were to plead the case for number one by giving some additional information – “but that means the principle of cause and effect breaks down, and the laws of physics lose their predictive power!” – it would be very difficult for anyone other than a physicist to relate the (actually huge) importance of that hypothetical discovery back to real-world implications for themselves, their family and their friends. What has actually changed, as a result of this research and its findings, that benefits me?

“Ah!” you might cry, “but it’s not the man* on the Clapham omnibus who funds my research!” But of course it is precisely the ordinary person who funds academic research, either through their taxes (e.g. UKRI) or through their donations (e.g. medical-research charities). Am I, then, attempting to steer this argument to the conclusion that real-world impact is indeed of more value than academic impact, because for the people who ultimately foot the bill it’s easier to understand and of far more obvious immediate relevance?

Absolutely not. I am going to take the wise precaution of moving away from theoretical physics at this juncture, safe in the knowledge that funders of research in that discipline are well aware of the value of fundamental research and appear to feel no pressing need to relate every discovery back to things that pretty much everyone can understand. Returning to the safer (for me) ground of the life sciences, it’s very evident that funders here too place considerable value on basic discovery research. But, for the most part, what they don’t have is the luxury of funding research that’s focused on making discoveries just out of curiosity, on addressing gaps in knowledge purely because they exist. Even the promise, abundantly backed up by precedent though it may be, that new knowledge is likely to prove useful in some way at some point in the future is unlikely to persuade life-sciences funders to part with their many-times oversubscribed cash.

Joining the dots


What’s needed then is a clear line of sight towards ultimate benefit for people’s health or well-being. A line of sight that, while it may inevitably include some imponderables and some things that are yet-to-be confirmed, has at the end of it real-world impact that’s sufficiently clearly defined and specific for a would-be funder to appreciate its nature, importance and magnitude. To achieve this, it’s vital to be able to articulate a clear understanding of what the next steps would be after the currently-proposed project ends, and how the outputs and outcomes from the current project would enable and facilitate those next steps. And of course it’s essential to have a really clear vision for why the project should be done at all – what’s the ultimate point from an impact perspective, and why in the context of that ultimate point is the research needed? On a pathway from idea to some form of implementation in the real world – be it new policy, new drugs, new healthcare practices or whatever – where and how does this project fit in? Why does it need to be on that pathway at all?  

Capturing the academic/real-world impact relationship in a proposal

Once we have this insight and understanding, and are able to articulate it clearly and in specific terms, we can produce wording to explain to a would-be funder what the line of sight to beneficial impact looks like. Taking a made-up (and no doubt hopelessly muddled) example, we might write something along the lines of the following:

While near-term impact will primarily benefit other academics working in the area of bacterial biofilm formation and dynamics by way of new knowledge and understanding, our ultimate impact aim is to inform the development of practical interventions to reduce nosocomial infection rates, with clear benefits for hospital patients. Specifically, we anticipate that by elucidating patterns of biofilm development by nosocomial pathogens under different growth conditions, we will facilitate the subsequent development of new therapies for preventing or disrupting colonisation and thereby controlling infection.

The above example contains little in the way of detail, and if it was a real-life example there is much in it that could be unpacked and expanded upon. But it does show that the writer of the proposal understands the point of doing the proposed research, from an ultimate real-world impact perspective. They have a clear line of sight to eventual impact in the area of hospital-infection control.

 * or, indeed, woman – although possibly that was an alien concept to the High-Court judge who invoked the hypothetical ‘Clapham omnibus’ traveller in a 1932 ruling on negligence.    



Friday, February 3, 2023

The joy of specificity


“The project will deliver significant impact, which will be ensured through effective dissemination.”

We all love some good impact, and we know that the funders love it too. And they’re terribly keen on dissemination, so hopefully there are some more brownie points to be had there. But when I see phrases like the one above in a research-funding proposal, unencumbered by any additional information – and believe me, I frequently do see them – my heart sinks. The total lack of detail suggests that the applicant either doesn’t really care about impact or dissemination, or hasn’t actually given them any thought. Quite possibly both. A good reviewer will always pick up on such a lack of specificity, and their response to it will not be positive. The same goes for any other areas in which the reviewers are expecting some substantive information, but are instead just given bland assurances with little or no detail to back them up.

When I review research proposals, two words that seem to crop up time and time again in the comments I write are ‘specific’ and ‘specificity’. Always in the context of encouraging more of them. The Oxford English Dictionary gives us the following helpful definition:

Specific
adjective
clearly defined or identified

If you’re serious about doing something, and committed to doing it properly, then what better way to convince someone of that than to identify and describe clearly the details of exactly what you will do – and perhaps how and when you’ll do it?

I know that space in the proposal document is always limited, and that there’s always so much to cover. But, assuming for a moment that you have thought through all of the various proposal details, cutting out the specifics of stuff that the funder really cares about is not a good strategy for success.

Below are some areas where I commonly find specifics to be lacking, but where at least some degree of specificity is pretty much essential.

Impact

‘Impact’ means things that change, usually (or at least hopefully) for the better, as a result of a research project or programme. It can happen over different timescales, and in different domains – in the academic world and in ‘real-world’ domains such as the economy, society, health and wellbeing. Whole books have been written about research impact, and it isn’t even the main focus of this blog post, so what follows will necessarily just be a brief overview. But when it comes to writing about the impact that you hope and anticipate your project will have, be sure to give some details about the following where they apply.

So, for academic impact:

  • Who will benefit – researchers and scientists working in which particular disciplines and fields, and on what particular topics? If possible, name some of the nationally- and internationally-prominent institutions, centres, groups and perhaps even individuals in these disciplines and research areas – these will be among your key academic beneficiaries.
  • How will they benefit? How, specifically, will the new knowledge, results, data, methods, models, approaches or whatever it is you hope to produce in your project help and advance them in their work? What will they be able to do (or do better), that they cannot already do now?

Perhaps you envisage that your project results will have specific benefits for other scientists working in areas where there will be downstream real-world impact in the future – if so, then spell this out.

For health, economic and societal impact:

  • Once again, who will benefit – which specific groups of people (for example patients with a particular medical condition and their families; and/or professionals working in a particular field)? What about industry sectors, possibly even individual companies? And other organisations, such as the third sector – perhaps including some specific charities? Government is often a beneficiary of research, particularly where the research is likely to influence policy, regulatory matters and/or public-sector practice. Which specific sections of national or local government will benefit?
  • Also once again, how will the benefits be felt? It’s not enough simply to list beneficiaries, and hope that it’ll be obvious to the reviewer how they will benefit from your research. Spell out exactly what you anticipate will change as a result of the research, and over what timescales, in the context of a problem that you have identified. No one expects a three-year project to completely solve a major problem, but reviewers will expect some specifics in terms of changes (or precursors to change) that are realistically achievable.


When it comes to the ‘who?’ of your beneficiaries, avoid using broad, unqualified terms like ‘stakeholders’ without explaining exactly what you mean by them. You should leave the reviewers in no doubt as to who your project stakeholders are and why they hold a stake in your research and its outputs. Words like ‘practitioners’ and even ‘policymakers’ can be similarly vague if you don’t make clear the particular area/s in which they’re practising or making policy.

Remember, a description of specific impact (whether academic or real-world) is a powerful answer to the question “what’s the point of doing this research?” If you’re unable or unwilling to address this question clearly by giving some specific details of your project’s anticipated impact, then you definitely can’t expect your reviewers to do it for you. They’ll be asking that question, not answering it.

Dissemination and impact-maximising activities

Dissemination is not impact. But making sure that the people, groups and organisations who will use, act on and benefit from your research and its findings actually get the information they need is normally a vital step towards achieving and maximising impact.

As with impact itself, specificity – or at least some specific examples – is vital for demonstrating that you’re serious about communication and dissemination, and have given them some proper thought. Having identified who, specifically, will benefit from your research, funders will expect you to have planned some targeted communication and dissemination activities, designed to ensure that the potential beneficiaries are given everything they need to realise those benefits. If your research is focused on informing policy in a particular area, for example, then you’ll need to make sure that the findings reach the relevant policymakers; if your vision for impact is to change practice, then likewise it’s vital to ensure that key decision makers and probably also practitioners in the area of focus are kept informed. When it comes to academic impact, you’ll want to ensure that all the right academics and scientists get to hear about your findings.

Conferences and publications are the bread and butter of dissemination to fellow academics and sometimes to professionals and practitioners in the area of focus. So which conferences will you be attending, and why have you selected them? When it comes to publications, which journals will you be targeting, and once again why? From a dissemination standpoint, the answer to the ‘why?’ question here will relate to each conference’s or journal’s potential to reach your target audience/s. So you’ll probably have picked the conferences and journals that are best attended, most widely read and most influential among the people and groups you want to reach.

Be specific about other aspects of your plans for dissemination and engagement. Don’t just allude vaguely to ‘impact activities’ – spell out what these will be. And don’t hide behind vague and generic terms like ‘stakeholder impact event’ – describe what form it will take, and which stakeholders you will be targeting. Similarly, when it comes to project outputs aimed at informing beneficiaries, try to give a flavour of what they will be. ‘Detailed summary of findings for practitioners in older-adult social care’ is much more helpful and descriptive than just ‘Stakeholder report’.

Research objectives

It’s often said that research objectives should be SMART, where the ‘S’ stands for ‘specific’. (The remainder of the ‘SMART’ acronym is accounted for by ‘measurable’, ‘achievable’, ‘relevant/realistic’ and ‘time-bound’.) But what does this actually mean in practice?

A reviewer will scrutinise the list of objectives with the aim of determining whether they’re do-able, whether it will be clear and apparent when each objective has been completed, and whether completing each objective represents a sensible and appropriate step towards achieving the overarching project goal. They can only evaluate the research objectives on this basis if they are all properly specific.

For example, to borrow from my oft-used (if rather simplistic) housebuilding analogy, ‘to complete kitchen floor’ really doesn’t cut the mustard as a specific objective. How will people know when the kitchen floor has definitively been completed? Indeed, will everyone agree that it has been completed? What was the original specification, against which everyone can compare the supposedly-complete floor?

If, on the other hand, the objective had been ‘to tile entire kitchen floor area with large-format tiles as supplied, laid direct to cement screed and grouted, with all work completed to BS 5385’, then there’s little room for ambiguity. Everyone can agree whether or not the work is complete. The completeness or otherwise of the work is measurable, and completing this particular (and very specific) objective is demonstrably a relevant key step towards the project’s overall goal of refurbishing the kitchen.

A project aim is not the same as a research objective, but specificity matters here too. ‘Improve outcomes for leukaemia patients’ is a laudable goal, but it lacks specificity. On the other hand, ‘reduce diagnosis delays in leukaemia by developing a new streamlined rapid-diagnosis pathway’ is much more specific, and gives the reader a real essence of the project in just a handful of words. Sometimes a two-part aim statement can work well, with a more general part followed by a specific part. For example: ‘To improve outcomes for leukaemia patients presenting with non-specific symptoms, by reducing diagnosis delays through development of a new rapid-diagnosis pathway’.

Very similar principles apply to project titles, where it’s really important to give a clear and concise ‘nutshell’ indication of what the project is about.

Explaining budget costs

When it comes to justifying your requested resources, which many funders will require you to do, specificity is once again to the fore if you’re going to satisfy the inquisitive reviewer and convince them that the costs you’re claiming for are real and necessary. Avoid lumping diverse costs under a single sub-heading, such as ‘consumables’, without any further breakdown or explanation. Instead, set out for the reviewer what the different types of consumable will be, giving quantities and a proper cost breakdown for each category. If you’re claiming for travel, explain how many people will be travelling, and give details of where, how and why. The same goes for things like conference fees.

Much of your budget is likely to relate to staff time – your own, that of your co-investigators, and the cost of employing postdoctoral researchers and technicians – and here again the key to justifying these costs is to be specific. What will people be doing in the project, and how much of their time will it take? What salary scales and grades will researchers and technicians be on, and why are these justified? Specificity. Specificity. Specificity.

Project and risk management

With apologies for starting to repeat myself, this is another area where some specificity is needed but is often lacking. In terms of managing the project and ensuring that milestones and deliverables are accomplished in a timely manner, who, specifically, will do what – and (if applicable) how and when (or how often) will they do it? Give details of responsibilities and task ownership, and the management structures that will be in place to steer the project. When it comes to identifying project risks, specificity is vital. What are the particular research and project risks that you have identified, how likely and serious are they, who will ‘own’ them, and what can be put in place to mitigate them?

Data management

Few enjoy writing about managing the data used and produced by their proposed research project, but it’s not uncommon for funders to ask for a data management plan. Lack of interest in (and perhaps knowledge of) the subject of data management may result in the temptation to write something anodyne about ‘storage on secure servers’ and leave it more or less at that, but a proper data management plan will require a fair amount of specific detail. What types of data, for example, will the project handle and produce? What formats will the data be in? What will the volume of data be? How will you ensure that data-quality standards are met? How and where, specifically, will data be stored, backed up and curated? What metadata standards will be used for the data, and what documentation will be in place? How will data be archived, preserved and shared? And so on, addressing some very specific points that may need to be covered in some detail.

Specificity – everywhere!

I’ve covered here a handful of the areas in which at least a degree of specificity is required, but where grant applicants frequently resort to rather bland, generalised statements that are free from any specific details. But as a rule, it’s wise to include some specifics wherever possible, since doing so will always reduce ambiguity, demonstrate your commitment and attention to detail, indicate thoroughness of planning, and generally strengthen your research proposal. Increasing, of course, its chances of success.


The views and opinions expressed in this blog are mine alone and are in no way endorsed by my employer. Factual information and guidance are provided on a 'best-endeavour' basis and may become out of date over time. Web-links were correct at time of writing but commonly go out of date. No responsibility can be taken for any action or inaction taken or not in respect of the content of this blog. 

Ten possible outcomes but just one of them positive: Navigating the evaluator's internal flowchart


Perhaps unsurprisingly given that I’m a bid writer, I spend quite a lot of time trying to get inside the mind of a ‘typical’ proposal evaluator (reviewer or panel member). How do they really think? What is the decision-making process that ends in them liking or not liking a proposal? And how can we influence that process?

Is there even such a thing as a typical evaluator? In my more idealistic moments, I like to think of them as super-humans with almost mythical levels of intelligence, able to scan through large piles of proposals with astonishing speed and assimilate their respective merits and demerits without favour or bias before ranking them perfectly according to a complex matrix of criteria. Such is their accuracy and consistency that any two of my super-human evaluators would rank that pile of proposals in exactly the same order.

Evaluators are human too...

I’ve never sat in on a grants panel, nor reviewed proposals for a funder, but rumour has it that the above characterisation may not be entirely representative. Certainly, we’ve all seen instances when two reviewers’ opinions on the same proposal appear to be diametrically opposed. And while the panel itself may subsequently impose a measure of sanity in such situations – the wisdom of crowds in action, perhaps – some of those individual panel members may not be quite as ‘on-it’ as we’d like to think they are. I’ve seen it suggested – admittedly with tongue at least partly in cheek – that there are often three identifiable types present on a grants panel:
  1. The one who knows everything: Has a robust opinion on all the proposals, dominates proceedings and seems to know a great deal about many things
  2. The perpetually-baffled one: Seems to struggle with many aspects of all the grants under review, leading to lots of questions and difficulty forming a definitive opinion
  3. The slightly scatty one: Appears to be at least four grants behind the panel as a whole, and may not be as familiar with each of the proposals as perhaps they might be

Of course, if you’re a panel member yourself then I’m sure you don’t fall into any of these categories. But perhaps you’ve encountered them? How then to write a proposal that accommodates all the different types of evaluator who might review it?

Flowcharting the evaluation process
My suggestion would be to pitch the proposal at the ‘super-human evaluator’ who is rigorous, informed, methodical and objective in their appraisal. Make sure your proposal covers all of the important criteria that they’ll be looking out for and using as the basis for their evaluation. To this end, I have had a go at flowcharting their decision-making process, and I think it should probably look something like this:

You are, in essence, telling a story, and it’s vital that the story you tell in your proposal weaves in all of the essential points that will guide the evaluator to the decision you want them to make.

So what about catering for those less-than-perfect evaluators, like the three caricatures we met above? My advice would be to ensure that your proposal is presented as clearly and as nicely as possible. By which I mean well-ordered thoughts and a logical narrative flow; clear and straightforward language that is no more complex than it needs to be; and presentational niceties such as hierarchical headings, white space, figures, and so on. The sorts of things, in fact, that I consider in some of my previous blog posts.



The power of a good story (and that all-important ‘However…’ clause)


“There’s nothing in the world more powerful than a good story. Nothing can stop it.”

So said a chap named Tyrion Lannister who, my internet tells me, is a character from a TV programme called Game of Thrones. And he’s right, I reckon, at least when it comes to grant applications. Sure, many funders state that the overriding factor they use when they make funding decisions is the excellence of the science, and indeed the science in your proposal (assuming it’s in a scientific area of research) will definitely need to be first-rate. But to grab the reviewers’ attention and sell them your science over and above the other competing proposals, you’ll need to embed your description of it within a very clear and compelling story. A story that explains very clearly what the point of doing that science is – why it really matters, and how we stand to benefit from it. 

When I was in primary school, I was taught that stories are like fish. They have a beginning (the head), a middle (the body) and an ending (the tail). I’m not sure whether the extent of my English teacher’s knowledge of piscine anatomy would stand up to close scrutiny, but I like the idea of identifiable key components without which a story is incomplete.

In the case of a research proposal – certainly the types of life-science proposal with which I’m most familiar – I’d suggest that the story will often break down into the following components:

The ‘problem’ statement
An identifiable problem, which could be a gap in current scientific knowledge and/or a ‘real-world’ challenge relating to society, public health, wellbeing or whatever.

An explanation of why the problem is important – its size, nature, severity and impact on the people whom it affects.

The ‘current state of the art’ overview
A description of where we are now in this area of science and its application – what we already know and can do, and how that has had benefits to date for the scientific field and for the domain of real-world impact on which the proposal is focused (for example, using blood biomarkers for cancer diagnosis). Wherever possible and appropriate, it's very valuable to be able to point to some promising preliminary data that sits at the cutting edge of the science. 

The ‘however’ clause
This is critical – it’s fundamental to justifying why the proposed research needs to be done. Great and highly promising though the current science may be, this is the big ‘but’. A description of what we don’t yet know, what we can’t yet do, that prevents us from achieving so much more in this particular scientific area. If only we could move the field on…

The ‘leap forward’ description
Having prepared the ground by setting out the above points in clear terms, this component of the story is a vital part of the jigsaw – a description of a novel and compelling idea for moving forward, overcoming the current obstacles, plugging the knowledge gap and advancing the field. The aforementioned preliminary data often plays a critical role in giving credence to this novel idea. 

The ‘impact’ promise
Just outlining the problem is not enough – here we state specifically how and to what extent we will address the problem. Scientific leaps forward are great, and in an ideal world we’d fund them all just for the sake of curiosity and the notion that producing new knowledge is a worthwhile end in itself. But funding and resources are limited, so without wishing to sound too grand we need to select those proposals that promise most benefit for mankind. Specificity is the watchword here – what, exactly, do you intend will change as a result of the research?

In an applied proposal, you will be describing things like new practices, new processes, new guidelines, changed policies, perhaps new products (for example drugs or medical devices). They don’t have to change or come into being directly on conclusion of your project, but you’ll need to describe how the project will advance the status quo towards those changes ultimately being realised.

In a fundamental-science proposal, much or all of the immediate impact is likely to be academic. So in the life sciences at least, you’ll probably be describing how the new knowledge you deliver will support and advance the work of other scientists who are undertaking impactful research in specific areas.

And did I mention specificity?

The ‘timeliness’ reinforcement
Why do this research now? If it’s such a good and promising idea then why has no one done it before? And even if it couldn’t have been done until now, is it really so urgent? This component of the story explains why your idea’s time has come (perhaps the technology just didn’t exist a few years ago) and describes, without hyperbole, why the work must now be done without delay (perhaps the important problem you have identified is escalating rapidly).

The remit reminder
This is a focused explanation of why, specifically, the funder should care about all this. Every funder has a remit area and strategy for supporting research, and as much as something may be of general concern to the world as a whole, if it’s not within a funder’s remit then their interest in it will be limited. It may well seem abundantly obvious by this stage that the proposed research and its intended impact will fall within the funder’s remit, but spell it out clearly for the reviewers. This part of the story should refer to specific elements of the funder’s mission, remit and strategic focus, and explain explicitly and convincingly why and how the proposed research will support these.

Building the story
You won’t necessarily assemble these key components in the same order as above and they won’t all necessarily be self-contained chunks of narrative – some may serve as a thread that permeates the proposal as a whole. But in the life sciences at least, the majority of strong proposals are likely to be underpinned by all or at least most of these basic components.

There are of course other components to the story. You’ll note, for example, that I’ve barely touched above on the scientific detail and methodology, and of course without these you don’t have a research proposal. But, for our purposes here, this aspect of the proposal is actually – believe it or not – largely secondary. It sits adjacent to the ‘why bother?’ story, and its purpose is to establish the credibility of your big idea. Anyone can say they’ll change the world, but if they propose to do so by witchcraft and magic alone then they may not be taken very seriously by those who hold the purse strings.

It’s no coincidence that the storyline building blocks I’ve outlined above would underpin a strong lay summary. I maintain that in the life sciences at least, it should probably be possible to boil down almost every research proposal to a handful of clear, strong statements from which my daughter (who is in Year 7) would grasp the essence of what the proposal is trying to achieve and understand why that matters in the real world.

This story-based narrative provides a central framework upon which to hang many of the essential parts of a research proposal. An overarching aim, for example, will be framed with clear reference to the problem you have identified and how you seek to address it. The main hypothesis and research questions will address the particular gaps in knowledge that you have identified, and the research objectives will outline the specific steps you will take to achieve the project’s aim. Drilling down further to the methodological detail, the description of this should be organised in such a way as to explain in practical terms how you will achieve each of your research objectives.

So: Get the work-plan done and you’ll achieve your objectives. Achieve the objectives and you should deliver against your aim. Accomplish the aim and you’ll have made an impact against the problem. Which we already know is important, matters to the funder, and needs to be tackled now.

There are usually other parts to the story that need telling, some of which may be peripheral but all of which are nevertheless important in their own right. How, for example, will you archive and share your data? What concrete steps will you take to maximise impact? If you’re applying for a fellowship then there’s a second story to tell that’s as important as the story behind the project – one about you and your motivation, your long-term career plans, and why fellowship funding is essential for realising your full potential as a researcher.

A real example – a great story in action
Below is an example of a grant that was funded a few years ago by the Medical Research Council (MRC). The project’s title, ‘The “Medical Bypass”: a new treatment for obesity and diabetes’, gives a strong hint as to what the story is about. You can read a bit more about the research on UKRI’s Gateway to Research (GtR) website, but in a nutshell:

Obesity is an important problem (it affects one in four people in the UK, and it’s getting worse). It’s a major cause of diabetes and other serious diseases. (The ‘problem’ statement, with a ‘timeliness’ reinforcement.)

We have just one anti-obesity drug, which is not very effective. We do though have gastric bypass surgery, which works well for treating obesity and diabetes. (The ‘current state of the art’ overview.)

But gastric bypass is expensive, irreversible and not without risk, having a 1 in 300 mortality rate. (The ‘however’ clause.)

This research will set out to prove the concept of administering satiety hormones (part of our body’s signalling mechanism that tells us when we’re full) to achieve similar results to a surgical bypass. (The ‘leap forward’ description.)

Ultimately, the goal of the research is to develop an effective treatment for obesity and diabetes that is safe and cost-effective. Without such treatments, obesity levels in the UK are projected to reach 50% plus by 2050. (The ‘impact’ promise, plus more timeliness.)

The brief information provided on the GtR website doesn’t explicitly include a remit reminder. It does though describe how the research, which was to be done using rats, was designed to justify a trial of the proposed ‘medical bypass’ approach in overweight patients. This, together with the focus area of the project (treatments for obesity and diabetes), leaves us in no doubt that the research is squarely within MRC’s remit. If the research had been of a more fundamental-science nature then a strong and explicit remit statement would probably have been essential.

The research itself involved measuring levels of gut hormones – peptide YY (PYY), glucagon-like peptide-1 (GLP-1) and oxyntomodulin (OXM) – and a good description of the project's science would have been absolutely essential for enabling the reviewers to evaluate the credibility of the idea and assess the quality and feasibility of the project. But while the science might well have been the part of the proposal that interested the applicant most, it’s worth noting that it barely features within the central ‘good story’ that makes a compelling case for funding the project and getting the science done.

