Fixing the Back Burner
Who are the angel investor equivalents in science? Where are the people who make lots of early bets on unproven people and high-risk, high-reward experiments?
My initial hypothesis: there aren’t any. Federal agencies serve other roles. The Defense Advanced Research Projects Agency (DARPA) makes risky bets, but not many of them. The National Science Foundation (NSF) and the National Institutes of Health (NIH) make lots of bets, but they’re more risk averse. Science philanthropy fills some of the gaps, but administrative requirements can be a barrier to high volumes. The science angel is a missing persona in the science funding ecosystem, despite the positive example we’ve been given from the startup realm. Given that, science funding should go small and wide to support lots of researchers in novel directions.
Owing to good feedback, I’ve modified my hypothesis: small, risky bets and experiments are happening, but still not enough. They operate primarily through an informal “back burner” system in which Principal Investigators (PIs) squirrel away bits of money and time from existing grants to explore new projects or fund graduate student ideas. Formalizing the back burner system will make it bigger, better, and useful to more people.
What changed my mind
I created and shared this diagram on Twitter in hopes that I would be corrected and pointed to an overlooked resource:
What was I missing? Who operates in that lower right-hand quadrant for science?
The first response I got was from someone deeply familiar with the current funding schema:
“The NSF already does small funding.”
This is true. The NSF runs a small grants program (which they define as less than $300k) — an alternative track of sorts — for potentially transformative research. Their definition:
The EAGER funding mechanism can be used to support exploratory work in its early stages on untested, but potentially transformative, research ideas or approaches. This work could be considered especially "high risk-high payoff" in the sense that it involves radically different approaches, applies new expertise, or engages novel disciplinary or interdisciplinary perspectives.
The process for getting these grants involves a briefer proposal and, importantly, skips the requirement for external (peer) review:
Exploratory proposals may also be submitted directly to an NSF program. Principal Investigators (PIs) must contact the NSF program officer(s) whose expertise is most germane to the proposal topic prior to submission of an EAGER proposal to determine the appropriateness of the work for consideration under the EAGER mechanism.
In FY 2019, the NSF issued roughly 700 EAGER/RAPID grants, totaling close to $80M (~1% of the total NSF budget and ~5% of all grants awarded). The EAGER program (RAPID is a slightly different program focused on urgent issues) accounted for 310 of those grants, totaling $67M. But it thins out dramatically as you go smaller. Only 14 of those grants were issued for less than $50k, cumulatively amounting to less than $540k. This means one of two things: either there isn’t much interesting science happening for <$50k, or the administrative burden on program officers becomes too high at smaller amounts. It’s probably the latter.
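To make that drop-off concrete, here’s a quick back-of-the-envelope calculation (a Python sketch using only the rounded FY 2019 figures above; the per-grant averages are my own derivation, not numbers the NSF reports):

```python
# Back-of-the-envelope averages derived from the FY 2019 figures cited above.
# Totals are approximate, and the per-grant averages are a derivation here,
# not numbers reported by the NSF.

buckets = {
    "All EAGER/RAPID awards":  (700, 80_000_000),
    "EAGER awards only":       (310, 67_000_000),
    "EAGER awards under $50k": (14,     540_000),
}

for name, (grants, total) in buckets.items():
    print(f"{name}: {grants} grants, ~${total / grants:,.0f} average award")

# Prints roughly:
#   All EAGER/RAPID awards: 700 grants, ~$114,286 average award
#   EAGER awards only: 310 grants, ~$216,129 average award
#   EAGER awards under $50k: 14 grants, ~$38,571 average award
```

In other words, only about 2% of those FY 2019 awards came in under $50k, even though that is roughly the budget range many researchers say they can do real work with.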
Many scientists, especially early-career folks, tell me they can go far with $50k (and in some cases get somewhere with $20k, even $10k!), and every program officer I’ve talked to confirms the administrative problem. The volume is untenable. Research analyzing decades of the NSF’s Small Grants for Exploratory Research (SGER) program, the predecessor to EAGER, shows that program officers “remained risk averse” by spending far less than the allocated SGER funds, even though more than 10% (!) of the grants were tied to “transformative results as measured by citations and as reported through expert interviews and a survey” (Wagner and Alexander).
The SGER program began in response to criticism of the peer-review process. The NSF tested an alternative idea, and it worked! Those results, a 1-in-10 shot at a transformative outcome, should grab the attention of every decision-maker and policymaker in science. The NSF does go small, but there is likely more to be done.
But that isn’t the whole story when it comes to small bets. Another comment on my diagram, this one from Trevor Blackwell:
Most of the high-risk small-dollar science funding happens out of professor's discretionary budgets…
I asked the science economist Paula Stephan to confirm. She did, and she had given it a name: the “back burner” system. Researchers always have something on the back burner — side projects they’re trying to get off the ground for future funding opportunities. It’s far-sighted and sometimes informal budget maneuvering. This activity, she told me, is ubiquitous.
What to do about it
The data from SGER and the fact that researchers need to engage in back burner activity suggest a major inefficiency in the scientific funding apparatus. I’m going to quickly gloss over a number of speculative implications, each probably worth its own full post and analysis:
The back burner system is probably good in that it keeps a high degree of experimental autonomy close to the scientists.
The back burner system is probably bad in that it works well for established scientists and terribly for unestablished scientists, perpetuating an increasingly inequitable funding ecosystem.
Something about the culture or process of the NSF is creating an environment of risk aversion around small bets.
Rather than spend time speculating, I’d rather jump straight to what we can and should test.
Start small
A number of the leading researchers studying the science of science funding have come to a strikingly similar conclusion: we should be applying a portfolio approach to science funding and we should be bolder in empirically testing new ideas (Stephan, Azoulay, Ricón). The problem — and this is where much of this research seems to get stuck before it can be tested — is that it’s hard to measure the returns to scientific investment. Bibliometrics alone aren’t cutting it. But that shouldn’t stop us from trying new things. From Azoulay:
First, there is great worth in maintaining a diversity of approaches for grant making. The analysis of grant systems should therefore be approached as a portfolio evaluation problem. A crucial activity for science policymakers is therefore the identification of gaps in the ecosystem of funding.
A straightforward way to identify these gaps is to overlay the financial investment map (a system with a clear metric for measuring return) and see what’s not there. To my eye, that’s the missing science angel.
We should double down on the EAGER model and the back burner system, but in a way that’s more transparent and inclusive. And we could probably do it for less than most people expect. Again, we have examples from the financial world.
A new type of venture investing emerged in 2005 when Y Combinator invented the startup accelerator model. Instead of investing large amounts of capital into a handful of companies, Y Combinator had the idea to invest a much smaller amount into a larger number of companies and to provide them with mentorship and coaching during a fixed-length, cohort-based program. The experiment was founded on the hypothesis that the costs of starting a technology company had fallen dramatically, making this new model of investing feasible. From Paul Graham’s initial essay on the hypothesis behind starting Y Combinator:
Like everything else in technology, the cost of starting a startup has decreased dramatically. Now it's so low that it has disappeared into the noise. The main cost of starting a Web-based startup is food and rent… The less it costs to start a company, the less you need the permission of investors to do it. So a lot of people will be able to start companies now who never could have before.
They were right. As of January 2021, Y Combinator companies had a combined valuation of more than $300B and had created more than 60k jobs. From a financial standpoint, going small was a great investment.
There’s evidence that the cost of starting an experiment is dropping, too (which is worthy of a longer and separate post). We should have a Y Combinator for science.
The Experiment Foundation
We’re currently testing these ideas by building on the progress of Experiment. Experiment has grown into the largest science crowdfunding website in the world, with nearly 1,000 projects raising close to $10M to get new research off the ground. It’s already a great platform for small-dollar projects (less than $10k), but we think there’s room to grow and, more importantly, to scale. The Experiment Foundation is starting a new program to fund and support science angels. Here’s how it will work:
Foundations and companies seed fund the program. A group of scientists is selected, and each is given a budget of $50-100k to contribute to others’ projects on Experiment. They are free to use their discretion in how they recruit, select, and allocate that amount. At the end of a one-year period, they will have a portfolio of experiments to show their work. All of this is done in public on Experiment, with minimal overhead and the potential to leverage additional support from the crowd. Based on past data, we estimate those $100k budgets will turn into $250-500k in funded science and 50-100 supported projects. The program will be evaluated using the same methodology used in the Wagner and Alexander assessment of the SGER program (or as close to it as possible).
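As a rough illustration of the math behind that estimate, here is a hedged Python sketch; the crowd-leverage multiplier and the implied per-project figures are back-calculated from the numbers above, not program guarantees:

```python
# Rough portfolio math for a single science angel, back-calculated from the
# estimates above. The crowd-leverage multiplier is an assumption implied by
# the $250-500k projection, not a guarantee.

angel_budget = 100_000          # upper end of the $50-100k discretionary budget
crowd_leverage = (2.5, 5.0)     # implied by the $250-500k funded-science estimate
supported_projects = (50, 100)  # estimated number of supported projects

funded_science = tuple(angel_budget * x for x in crowd_leverage)
print(f"Funded science: ${funded_science[0]:,.0f} to ${funded_science[1]:,.0f}")

# The implied average project size (~$5k) sits squarely in Experiment's
# existing small-dollar range.
print(f"Implied average project size: ~${funded_science[0] / supported_projects[0]:,.0f}")

# The angel's own contribution per project is smaller still.
print(f"Angel dollars per project: ${angel_budget / supported_projects[1]:,.0f} "
      f"to ${angel_budget / supported_projects[0]:,.0f}")
```

Under those assumptions, each angel puts only $1-2k of their own budget into a typical project, with the crowd supplying the rest.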
This program builds on the best of the back burner system by putting money and discretion closer to scientists. It fixes some of the pitfalls by supporting researchers and research that the existing system misses (a peer-reviewed study found that Experiment flipped science’s traditional reward model). Importantly, by employing the techniques of platform philanthropy, the program will avoid the administrative burdens usually associated with high grant volumes.
Our goal is simple: widen the onramp for novel research and bold new questions.
Please stay tuned for updates and news. If you’re a researcher, please bring us your “back burner” projects. And if you’re a foundation or company that wants to test this with us, please email us. Comments, questions, and feedback are best sent via Twitter.
Consensus view: Science funding could and should be improved.
— David Lang (@davidtlang) March 16, 2021
There is not consensus, however, on how to do it. Here's my best idea along with a proposed experiment to run: https://t.co/DrSFVmNR84