Should Grant Funding Be Awarded Through a “Lucky Draw”?

Yes, you read that correctly: a lottery system to award grant funding to academic researchers. Any academic researcher is aware of how competitive grant funding is. Traditionally, however, funding was awarded only after you proved that you were a reputable scientist and the best candidate for the grant.

Academic researchers rely on grant funding to carry out experiments and support their students. Science is expensive: it requires costly equipment and costly reagents. Grant funding is vital for testing new theories and pushing the research that will make a difference to the world we live in. Recently, however, some reputable funding agencies decided to use a lucky draw to determine who receives funding. Relying on "chance" goes against the way scientists think; random selection is normally used to prevent bias when assigning subjects in an experiment, not to make decisions as important as which project gets funded. So why has this happened?

Why are Funders Using a Lucky Draw?

It turns out that reviewers are having a really tough time deciding whose grant application is the most deserving. They receive so many applications of high quality and great importance that they simply cannot decide whom to award the grant to. Instead, they screen the applications and draw up a shortlist. The shortlisted applications are all of comparable quality and relevance and, more importantly, they all meet the minimum requirements set by the funding agency. A "lottery" or "lucky draw" is then used to randomly select the project that will receive the funding.

The Lucky Draw Process

The "lucky draw" process promises to be more efficient than the traditional system. Even a well-written, well-thought-out grant application is no guarantee that the research project will succeed. Funding bodies argue that once an application meets certain criteria, they may as well toss a coin to decide who should receive the funding; luck is already part of the process.

This is how the lucky draw process works:

  1. Funding bodies decide on the criteria for their grants.
  2. Applicants write a grant application.
  3. The applications go through traditional peer review.
  4. Applications that do not meet the minimum requirements are returned to the applicant with reasons for rejection.
  5. Applications that merit funding are each allocated a number.
  6. A random selection is carried out, either by drawing numbers out of a hat or by using a computer system.
  7. "Unlucky" applications go back into the next draw.
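
For the sake of illustration, here is a minimal Python sketch of how such a two-stage process (peer review against a minimum standard, then a random draw among the eligible applications) could look in code. The function name, the score field, and the threshold are hypothetical; this is not any funder's actual procedure.

```python
# A minimal sketch of the funding lottery described above.
# All names, thresholds, and data are hypothetical illustrations.
import random


def run_funding_lottery(applications, minimum_score, awards_available, seed=None):
    """Split applications into rejected, funded, and carried-over groups.

    `applications` is a list of dicts with "title" and "review_score" keys;
    the score is assumed to come from traditional peer review (step 3 above).
    """
    rng = random.Random(seed)

    # Step 4: applications below the minimum go back to the applicants.
    rejected = [a for a in applications if a["review_score"] < minimum_score]

    # Step 5: every application that merits funding enters the draw.
    eligible = [a for a in applications if a["review_score"] >= minimum_score]

    # Step 6: random selection among the equally eligible applications.
    rng.shuffle(eligible)
    funded = eligible[:awards_available]

    # Step 7: "unlucky" applications roll over to the next draw.
    next_draw = eligible[awards_available:]

    return rejected, funded, next_draw


if __name__ == "__main__":
    sample = [
        {"title": "Project A", "review_score": 82},
        {"title": "Project B", "review_score": 91},
        {"title": "Project C", "review_score": 55},
        {"title": "Project D", "review_score": 88},
    ]
    rejected, funded, next_draw = run_funding_lottery(
        sample, minimum_score=70, awards_available=2, seed=42
    )
    print("Rejected:", [a["title"] for a in rejected])
    print("Funded:", [a["title"] for a in funded])
    print("Carried over:", [a["title"] for a in next_draw])
```

Fixing the random seed, as in this sketch, simply makes the draw reproducible, which is one way a funder could keep the selection auditable.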

Funders Already Using the Lottery System

High- and low-quality applications are easy to rank; it is the applications in between, i.e., the mid-quality ones, that are difficult to rate. Here are some reputable funders that have used this "lottery" system:

  • Swiss National Science Foundation (SNSF): Recently gave this random selection process a test run.
  • Health Research Council of New Zealand: Has been using this system since 2015 and says the traditional process is inappropriate. It feels that this system encourages fresh ideas.
  • Volkswagen Foundation in Hannover: Has been using this system since 2017 for some of its grants.

The Benefits of Taking a Gamble

The "lucky draw" approach takes human bias out of the process. The Bill & Melinda Gates Foundation found that reviewers tend to score proposals written by women lower than those written by men, even when the applicant's details are withheld. The study suggests this is because women use more specific language and are more cautious in their claims, whereas men write in broader terms and make more sweeping claims; reviewers seem to prefer the bolder style of writing.

Another study found that reviewers chosen by the grant applicants themselves were four times more likely to rate an application as excellent. Other unintentional biases also creep into the system:

  • Male reviewers tend to give higher scores than female reviewers.
  • Academics aged over 60 tend to give better reviews than younger academics.
  • Reviewers affiliated with certain institutions tend to give higher ratings.

The "lucky draw" approach also saves time. Traditionally, funding bodies need reviewers to assess grant applications, followed by time-consuming face-to-face meetings in which a panel of referees decides who should receive the funding. Time-saving strategies, such as video grant applications instead of traditional written ones, have already been considered. Using the "lottery" system means the time spent in panel selection meetings can instead be spent on valuable research.

The "lucky draw" approach is also better for a researcher's self-esteem, and this works both ways. Researchers whose applications reach the lottery stage but are unsuccessful can simply feel "unlucky" rather than unworthy, whereas being unsuccessful in the traditional system makes researchers feel their work is not good enough. On the other hand, researchers who do receive funding this way are less likely to feel entitled, since their success was partly down to luck.

Science Should Not Be a Gamble

While I fully understand the reasons behind the lottery approach, it still concerns me that an application may miss out on funding purely through random selection. Could a valuable project get "unlucky", leaving the research community without novel research? I would imagine most grants have predefined criteria and fields of research they wish to fund; for example, a particular grant might target one aspect of one particular type of cancer. This would cut the selection down to the most relevant applications, from which the most promising, and then the next most promising, could be funded.

Consistently "unlucky" researchers may also be disadvantaged by this system. Imagine the frustration of a novel research project failing to get funding purely because of bad luck. Do you see any merit in a "lucky draw" system for grant applications, or do you think there is a better approach? Let us know in the comments section below.
