You may have heard the saying, “never let a crisis go to waste” – an axiom that has been invoked by leaders from Winston Churchill to the Obama Administration’s Rahm Emanuel.
Together with his coauthor Bhaven Sampat of Columbia University, Gross recently studied the U.S. policy responses to two distinct crises, World War II and the COVID-19 pandemic, and detailed his findings in the paper “Crisis Innovation Policy from WWII to COVID-19,” published in the National Bureau of Economic Research’s Entrepreneurship and Innovation Policy and the Economy annual volume.
In the Q&A below, Gross explains potential lessons for managers and policymakers on producing innovation both in crises and in times of stability.
You study how governments unlock innovation in times of crisis. Why is this important for businesses to understand?
There are two ways to think about this question. The first is the direct view: models for how firms can prepare for and manage through a crisis, which to us means an unexpected, acute, major challenge where innovation may be part of the solution. Think, for example, of the U.S. auto industry in the 1970s oil crisis, or firms with supply shortages in the COVID pandemic. The second is the wider view: history is punctuated by crises, and it’s important to understand how technological possibilities change with them.
What makes a crisis different from ordinary times is that we face “exploding” problems whose costs will accelerate if not resolved quickly. This was certainly the case in the pandemic. It was also the case in World War II, and is the case for climate change.
What we get as a result is problems with both urgency and scale. This can create a willingness to invest massive public resources into use-oriented R&D and to take chances on unproven or highly uncertain technology—like atomic fission or mRNA vaccines. Technologies that struggle to get the same level of public or private investment in ordinary times can get traction in a crisis. Some of them can turn out to be game-changers, and unlock a wide range of commercial opportunities after the crisis ends.
You have looked at U.S. innovation in World War II and in the COVID pandemic. What parallels and differences do you see?
Let’s break this down into two pieces: similarities and differences in how the innovation response was executed, and in its impact.
In both crises, speeding R&D and getting results into production and into the field was the priority. Cost was not a major obstacle. This also meant a focus on applied R&D rather than fundamental science. Urgency means we largely have to take the science as given and put it to use.
Both efforts also used a portfolio investment strategy—pursuing multiple candidate solutions for any given problem—when it was initially uncertain which ones would work. In the war, for example, we had parallel investment in five approaches to creating enough fissionable material for a bomb, as well as investments in both creating synthetic penicillin and scaling up the production of natural penicillin. In COVID, of course, Operation Warp Speed invested early on in multiple vaccine platforms and candidates.
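The portfolio logic can be made concrete with a back-of-the-envelope calculation (our illustration, not from the interview; the probabilities are hypothetical). If candidates succeed or fail independently, funding several in parallel sharply raises the odds that at least one works:

```python
# Illustrative sketch (hypothetical numbers, not from the source):
# with n independent candidates, each succeeding with probability p,
# the chance that at least one succeeds is 1 - (1 - p)^n.

def p_at_least_one_success(p: float, n: int) -> float:
    """Probability that at least one of n independent candidates succeeds."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    # Suppose each vaccine platform has a 30% chance of working.
    for n in (1, 3, 5):
        print(f"{n} candidates -> {p_at_least_one_success(0.3, n):.3f}")
```

With a 30% per-candidate success rate, five parallel bets push the probability of at least one success above 80%, which is why paying for "redundant" R&D can be rational when failure is costly and time is short.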
In both cases we also saw investment in manufacturing capacity at risk—building manufacturing capacity before a technology was known to work, so that, if proven, production could be ramped up immediately.
However, the differences between the WWII and COVID responses are striking. In the pandemic, there was no one agency that was working backwards from the problems to the research needed to solve them. COVID presented a wide range of questions around pharmacological treatments, rapid testing, contact tracing, mitigation technologies, and so on, but there really wasn't a coordinated, centralized attack on the full range of questions. The federal government was mainly focused on vaccines.
In World War II there was one organization—the Office of Scientific Research and Development (OSRD)—whose job was to identify military R&D problems, allocate resources, coordinate efforts, get the results of R&D into production, and get technology onto the battlefield. You really had one agency running air traffic control on the full military R&D portfolio. In COVID, the lack of coordination had real impacts, illustrated by the large number of trials on just one therapy (hydroxychloroquine)—particularly problematic with a limited supply of sick patients for clinical trials, which meant other candidate drugs got boxed out. Some consequences are invisible, like the innovation that goes missing. Other consequences we can see, like vaccine hesitancy or confusion over how the disease spreads and how to stay safe.
On a more optimistic note, both the World War II and COVID R&D efforts did, or are likely to, open up a range of possibilities for future technological advances. In the postwar period this ranged from electronics and computing, to safe production of nuclear energy, to the golden age of drug discovery. Post-COVID, we see opportunity for mRNA approaches to vaccines, with high potential for improving human health: universal flu vaccines and a long-elusive malaria vaccine are already under development.
What lessons of COVID-19 would you want policymakers to understand for future innovation policy?
The first lesson I would emphasize is the value of a focused crisis R&D agency. It's a difficult role to play, because it's also political—you need to manage various constituencies, balance interests, win turf battles—we saw plenty of this in World War II as well. It takes deft leadership to be able to navigate this environment. And yet in World War II, the OSRD played a critical role in meeting the many and multifaceted needs of the Allied war effort, which were pivotal to winning the war. It’s hard to argue that we conquered the pandemic in the same way—Warp Speed was a tremendous success but I’d argue we could have done better.
Another point I would bring up to policymakers is to reinforce the value of investing in basic research. In a crisis, the pre-existing knowledge base is the material out of which we craft solutions to the new and urgent problem. For example, without 20-30 years of research on messenger RNA, we wouldn’t have the Moderna and Pfizer vaccines.
So there are two types of research governments invest in: basic and applied. Explain those categories. What are the advantages of each type of research?
The basic vs. applied dichotomy is something of a canonical framework for R&D. The general view is that basic research—that is, undirected scientific study of natural phenomena—provides a knowledge base that gets used in (more problem-oriented) applied research and technology development. This framework, by the way, largely originated with OSRD’s director, Vannevar Bush, who used it to push a vision for postwar research policy.
In the physical sciences this might be theoretical or experimental investigations of light or energy vs. electrical engineering. In the biomedical sciences, it could be laboratory investigations of how certain cells bind to certain molecules vs. clinical trials of a drug. The underlying principle is that scientific research doesn't necessarily have to have a particular application in mind, but it produces the insights that make applications possible.
In the WWII and COVID crises, public investments were directed primarily at applied research, driven by urgency: we needed a solution, and we needed it as soon as possible. In ordinary times, R&D policy is primarily science policy, focused on funding basic research, whether it's through the National Science Foundation, or through the NIH.
It’s not that either category (basic vs. applied) is better or worse, but rather they operate symbiotically. And of course, the boundaries between what is “basic” and what is “applied” can be fuzzy, but this is broadly thought to be a useful framework for organizing our thinking.
The key takeaway, really, is that crises illustrate just how valuable basic research can be—even more than we used to believe, because it doesn’t just produce the scientific advances supporting regular technological progress, it also lays the foundation for crisis innovation.
How can the private sector take advantage of the opportunities that crises create?
One way to view the question is as asking “how can firms harness the commercial opportunities for new technology created in a crisis which may have continuing value after the crisis ends?”
One of our recent working papers begins to get at this question, exploring how crisis R&D projects and more general government R&D programs create platforms for new technology-based firms and industries.
In that paper, we dove specifically into the World War II radar project, which developed the field of microwave engineering and produced what was arguably the most important technology of the war, both for its resolution and for the postwar era (it is often said that radar won the war; the atomic bomb merely ended it). It also played a key role in the development of modern electronics and communications.
The radar program—run primarily out of a newly-created large, government-funded central laboratory (the “Rad Lab”) and operating in collaboration with the military and manufacturers—laid the groundwork for a postwar industry to blossom. The cornerstones included new technical knowledge which was widely disseminated after the war, new research talent which fanned out across the country, a domestic manufacturing base, and an anchor (military) customer. These assets became critical resources for firms competing in this space when a commercial market for radar developed in the late 1940s, and in adjacent industries. For example, many firms scooped up Rad Lab alumni when it demobilized at the end of the war.
Why would a firm choose to participate in a public crisis R&D project to begin with? One reason might be to secure a better competitive position when the crisis ends—for example, they might create or acquire some assets that have durable value, even after the crisis recedes. It could be intellectual property like patents. It could be internal capabilities and know-how. It could also be establishing customer relationships, especially if you expect the government to continue to be a major customer going forward—in the case of COVID, by buying more vaccines for boosters.
But that’s only the beginning. Our collective knowledge base on mRNA is now dramatically larger than it was three years ago, and the population of scientists trained in mRNA techniques is too. And that’s a resource that opens the door to further entry into the fields and industries that are carried forward by the crisis innovation investment.
More generally, there is an insight here into the value of government R&D and government demand in getting new, complex, and often science-based technologies off the ground—the category that is these days often referred to as “hard tech” or “deep tech.” Let’s take another example: integrated circuits in the Space Race. The first integrated circuit (IC) was invented in 1958, and it turns out that for the first half of the 1960s, the U.S. military and NASA were buying up nearly the entire domestic IC output. This was at a time when unit costs were very high—too high to be commercially viable—but so was the government’s willingness to pay. With this early demand, chipmakers began to work their way down the learning curve, and within a few years ICs were being produced at low enough cost that they could be sold profitably at commercially viable prices.
The combination of government investment in industry building blocks and guaranteed demand with a high willingness to pay in a crisis helps get these industries off the ground. And when the crisis ends, there's now a platform off of which commercial activity can spring.