Innovation in a crisis: Why it is more critical than ever

A recent analysis of scientific articles and patents from previous decades suggests that major discoveries in science are becoming less common. What has caused this shift and what is the situation in Germany?

Throughout the past 120 years, there have been numerous groundbreaking discoveries that have had a significant impact on the world, such as the first antibiotic, the invention of the Internet, and the development of the first chemotherapy. However, a study published in the journal Nature indicates that such groundbreaking innovations are now occurring less frequently.

The study reveals a noteworthy decrease in the innovative capacity of science. This trend, which had been previously suggested, is now confirmed by a comprehensive analysis of 45 million scientific articles and 3.9 million patents across all scientific fields. The study concludes that the proportion of groundbreaking discoveries has substantially declined from 1945 to 2010, despite a significant increase in overall research. According to science historian Helmuth Trischler, this suggests that a higher volume of research does not necessarily lead to more breakthrough innovations.

The research team used an innovation score to evaluate whether the examined work confirmed previous theories or introduced new ones. The study found that the innovation score for scientific articles decreased by 90 percent between 1945 and 2010, and for patents by 78 percent since 1980. The study’s authors also highlighted a shift in language use, noting that words such as “produce” are being replaced by words such as “improve,” and that researchers are increasingly repeating themselves.

The study authors stated in a press release that they are observing a decreased tendency in scientific articles and patents to break away from established practices and steer science and technology in new directions. However, there is also a significant need for innovation, particularly in fields such as climate change and space research.

Marco Tamborini, a science historian and philosopher at TU Darmstadt, commented that research teams often lack boldness. He emphasized that young scientists are under pressure to publish frequently and are compelled to structure their research questions around funding applications, leading to increasingly narrow research inquiries. Tamborini stressed the importance of a broader perspective for significant innovations and highlighted the need for young researchers to have more time for creativity. The study authors also advocate for evaluating the quality rather than the quantity of papers.

That the decline in innovation appears across all scientific fields undercuts the speculation that many easily attainable innovations have already been discovered. Another factor contributing to the decline may be the review process, which can impede innovation by pitting new ideas against established research within the same field. Science historian Trischler pointed out that, on average, a research paper is reviewed by only three scientists before publication.

The study suggests that smaller research teams are more likely to produce innovative work. While larger research groups are involved in more specialized work, smaller teams have been responsible for many groundbreaking discoveries. However, Tamborini argues that larger research teams stand a better chance of securing significant funding.

The future of the economy depends on significant innovations, raising concerns about the decline in groundbreaking research. However, Trischler cautioned against overemphasizing the study’s findings, pointing out methodological weaknesses in the meta-analysis. He clarified that while the proportion of groundbreaking discoveries is decreasing due to a higher volume of publications, the absolute number of such discoveries has remained constant. Additionally, the calculation of the innovation score has sparked controversy.

Trischler believes that the current situation reflects a phase shaped by past actions. He noted the existing potential for innovation, particularly in biotechnology, and highlighted the need to accelerate the translation of research into real-world applications.

Trischler is optimistic about Germany’s innovation capacity, stating that despite having only around one percent of the world’s population, Germany accounts for approximately ten percent of global innovation activity, with some areas reaching as high as 20 percent. For instance, eleven percent of the world’s patents related to hydrogen technologies originate from Germany, making it the leader in the EU.

While the study’s analysis only extends to 2010, Trischler acknowledges that there have been significant breakthroughs since then, such as the first direct detection of gravitational waves and the development of mRNA vaccines. Despite these achievements, he expressed concerns about the conditions for innovation in Germany.

Data on productivity growth in the United States, which depends on innovation, point to a slowdown of the U.S. innovation ecosystem. Total factor productivity grew strongly in the mid-20th century but began to slow around 1970. That reduced growth continues to this day, with productivity growth rates lower than they were over 100 years ago, despite increased investment in scientific research.
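For reference, "total factor productivity" here is the standard growth-accounting residual (this framing is textbook economics, not something specific to the analysis): the part of output growth not explained by growth in capital and labor inputs.

```latex
% Standard growth-accounting definition of TFP (the Solow residual),
% assuming a Cobb-Douglas production function Y = A K^{\alpha} L^{1-\alpha}:
\[
  \frac{\dot{A}}{A} \;=\; \frac{\dot{Y}}{Y} \;-\; \alpha\,\frac{\dot{K}}{K} \;-\; (1-\alpha)\,\frac{\dot{L}}{L}
\]
% Y: output, K: capital, L: labor, alpha: capital share of income (roughly 0.3).
% "TFP growth" is A-dot/A, the component of growth usually attributed to innovation.
```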

The reason behind this is that, since the 1970s, the U.S. innovation ecosystem has fragmented, causing a separation between corporate and academic science and making it more challenging to apply basic scientific discoveries. If we want to observe greater productivity growth, we need to consider alternative methods for translating science into invention.

Departure of large corporations from scientific endeavors

Before the 1970s, some major American companies invested heavily in scientific research to the point that their corporate science matched or exceeded university research in quality. DuPont’s central R&D unit, a pioneer in polymer chemistry, published more articles in the Journal of the American Chemical Society than MIT and Caltech combined in the 1960s. AT&T’s Bell Labs, renowned for the transistor and information theory, had 14 Nobel Prize winners and five Turing Award recipients among its alumni.

By the 1980s, a mix of pressure from shareholders, increased competition, and public setbacks caused corporations to reduce their investments in science. Instead, companies began to rely on universities and small startups for fresh ideas. Bell Labs was separated from its parent company, AT&T, and placed under Lucent in 1996. Xerox PARC was spun off into a separate company in 2002. Under Louis Gerstner, IBM shifted its research focus towards more commercial applications in the mid-90s. DuPont shut down its Central Research & Development Lab in 2016.

The decline in corporate investment in science is evident from the accompanying chart, showing that the share of research (both basic and applied) in total business R&D in the U.S. dropped from approximately 30% in 1985 to less than 20% in 2015. While the amount of basic research stagnated over the 20-year period between 1990 and 2010, total industry spending and patenting on development activities steadily increased.

This decline in research is also reflected in data on scientific publications. A study involving 4,608 American listed firms engaged in R&D found that the number of publications per firm decreased at a rate of 20% per decade from 1980 to 2006. This trend is also evident in data on scientific awards: Another study found that Fortune 500 firms won 41% of innovation awards in 1971, but only 6% in 2006.

This marks a significant shift in the U.S. innovation ecosystem. We’ve transitioned from an economy where major corporations conducted both scientific research and development to one characterized by a clearer division of labor, with companies focusing on development and universities concentrating on research.

In contrast to the corporate sector, universities have continued to expand their research, a trend that gained momentum after World War II. Data from the National Science Foundation (NSF) reveals that university spending on research grew more than fourfold, from $15 billion to $62 billion between 1980 and 2015. Moreover, peer-reviewed scientific publications increased by 37% between 2003 and 2016 for universities, while those for firms declined by 12%.

While specialization allows universities and corporations to become more proficient at producing research and developing products, respectively, this division of innovative labor has made it harder for research to translate into practical products. University science differs in nature from corporate science. Companies have access to specialized resources that universities find challenging to replicate. For instance, Bell’s invention of the Holmdel Horn Antenna, Google’s creation of the Tensor Processing Unit (TPU), and Pfizer’s use of High Throughput Screening (HTS) processes are achievements that universities or small firms would have found difficult to accomplish. These inventions demanded both scale and scope, which the individual-investigator model in universities would struggle to reproduce.

Additionally, corporate and university researchers are driven by different incentives, which affects how readily their research can be translated into practice. University researchers are rewarded for priority (“who discovers it first”), while corporate researchers are rewarded for the usefulness of their inventions (“does it work”). As a result, university research is more likely to be original but less likely to function as intended by businesses. Inventors seem to be aware of this issue: a recent study found that a discovery published by a university research team is 23% less likely to be cited in patents than the same discovery published by a corporate research team.

Even if universities generate relevant and applicable knowledge, it may be challenging for companies to locate it. As companies decrease investment in fundamental research, they have fewer researchers engaging with the broader academic community. This suggests that companies will increasingly struggle to access relevant research and apply it.

Venture capital (VC) and startups have served as solutions for connecting university research with commercial applications. Emerging technologies in various fields, including molecular biology, biochemistry, integrated circuits, and personal computing, were initially overlooked by large companies until VC-backed startups developed successful products, such as microprocessors (Intel), synthetic insulin (Genentech), and the Macintosh (Apple).

VC-backed startups bridge the gap between university science and corporate innovation. VC managers often have advanced degrees in the areas they invest in and gain commercialization experience over time. Startups also do not adhere to established business models that make traditional firms resistant to new ideas. For example, although Xerox PARC’s research was successfully commercialized by Xerox when it aligned with the company’s core business (e.g., laser printers), it failed to gain traction when it diverged from it (e.g., GUI, ethernet).

However, VC has not been a perfect solution, as investments have mainly focused on the life sciences and digital innovation. Data from PwC Moneytree shows that information and communication technologies (ICT) and life-science startups received approximately 83% of all VC investments between 1995 and 2019. This concentration of funding starves innovation in other sectors critical for advanced economies, such as energy, which relies on advances in materials science and nanotechnology for more efficient grids and safer power generation.

The challenges of translating scientific research beyond ICT and the life sciences

Why do entrepreneurs in science-based fields struggle to secure funding outside of these two sectors? We believe this is due to both technical and commercial uncertainties.

Technical uncertainty revolves around whether a technical problem can be solved using a proposed approach. Commercial risk pertains to accurately evaluating the demand for a proposed product and the likely costs of scaling up and serving the market. Most software projects face minimal technical risks, focusing instead on what should be done rather than how. Commercial risks are managed by setting commercial milestones (e.g., number of users or cost of customer acquisition). When a startup achieves a milestone, it receives additional investment to progress toward the next one.

The life sciences entail significant technical uncertainty but low market uncertainty, given the consistent need for new medical treatments and devices. Projects in this field can be mapped to specific milestones and managed accordingly. Success in reaching a milestone allows investors to estimate the increase in value based on the potential market size.

When a project carries both types of risk at once, it is difficult to fund, which helps explain why physical-sciences startups receive limited private-sector funding. The energy sector serves as an example: thermionic energy generation, a method of converting heat directly to electricity, promises significant improvements over mechanical heat engines. Despite its exploration for powering satellites in the 1960s, the technology was overlooked by investors until recently because of technical challenges – the lack of readily available microfabrication tools required for prototypes.

Innovations in the energy sector face notable market risks, as adoption often requires changes in existing technical infrastructure, consumer behavior, and government regulation. Advancements in grid-energy storage technologies, crucial for wind and solar clean-energy innovations, depend on downstream market demand. VC funding in battery technology startups started gaining traction only in the 2000s, after the automotive sector began embracing hybrid and fully electric vehicles.

Large corporations can often better manage these commercial and technical uncertainties than startups due to their experience in bringing products from labs to markets. They, or their partners, can also provide a source of demand and effectively coordinate changes across the entire value chain while handling regulatory challenges more adeptly.

Where do we go from here?

How can we connect the realms of science and application in overlooked sectors? One option involves the public sector providing funding for promising startups in the physical sciences, allowing them to focus on solving technical issues, with the hope that private capital will later step in to address commercial challenges. For example, the SBIR grant program at the Department of Energy supports renewable energy startups, and has been shown to increase patenting, revenue, and successful exit rates by alleviating financial constraints.

More funding would help to address technical uncertainties by enabling the development of capital-intensive prototypes over extended development cycles, and validating performance for potential investors. Organizations with specific missions, such as DARPA and ARPA-E, continue to support fundamental innovations, including those that have led to the creation of the Internet, automated voice recognition, language translation, and Global Positioning System receivers.

Another approach is to nurture scientific entrepreneurial talent. Initiatives such as Cyclotron Road at Berkeley Lab and Runway at Cornell Tech offer research fellowships for post-doctoral scientists and engineers, focusing on the transition from discovery to application. These fellows leverage national lab and academic research infrastructure to advance technologies based on their research and explore commercial viability.

Similar programs at research institutions in the U.S., Canada, and Germany are following suit. The mentorship and equipment that scientists receive in these programs enables them to better address commercial uncertainty by becoming more attuned to consumer needs and developing viable business models.

Although it may be tempting, there is little value in longing for the past, whether it was a golden age or not. The new innovation ecosystem holds great promise. What we need is a more effective way to leverage today’s scientific advancements and technical breakthroughs to accelerate productivity growth.

The pace of scientific disruption and technological invention has significantly slowed down, according to scientists.

A new study cautions that the substantial amount of new scientific and technological knowledge does not appear to be contributing to new groundbreaking discoveries and innovations.

Researchers set out to examine whether technological progress was accelerating or decelerating. To do so, they analyzed 45 million scientific papers and 3.9 million patents spanning six decades. They utilized the CD index, a tool that measures the impact of papers and patents on citation patterns in science and technology, to produce a “disruptiveness score” indicating the extent to which a paper has pushed science and technology in new directions.

Their objective was to utilize the extensive body of scientific research to analyze two different types of breakthroughs: those that build upon existing knowledge and reinforce our current understanding, and those that are genuinely disruptive and render old knowledge obsolete, offering researchers new avenues to explore.
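For orientation, the CD index introduced by Funk and Owen-Smith, on which this disruptiveness score is based, can be written compactly as follows (a standard formulation of the published index, included here for reference):

```latex
% CD index of a focal work, evaluated over a forward window of t years:
\[
  \mathrm{CD}_t \;=\; \frac{1}{n_t}\sum_{i=1}^{n_t}\left(-2\,f_{it}\,b_{it} + f_{it}\right)
\]
% f_it = 1 if forward work i cites the focal work, else 0;
% b_it = 1 if forward work i cites any of the focal work's references, else 0;
% n_t  = number of forward works citing the focal work or its references.
% Each forward work contributes +1 (cites the focal work only: disruptive),
% -1 (cites the focal work and its references: consolidating), or 0,
% so CD_t lies in [-1, 1].
```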

The authors discovered that new research is becoming less disruptive and aligning more closely with the existing status quo. This trend is evident across all disciplines, particularly in the social sciences and technology.

The reasons for this shift are not definitively clear. However, the authors suggest that scientists and inventors are increasingly focusing on narrower fields for their new discoveries.

“A healthy scientific ecosystem involves a blend of disruptive discoveries and consolidating improvements, but the nature of research is evolving,” stated Russell Funk, a researcher at the University of Minnesota and the lead author of the study. “With incremental innovations becoming more prevalent, it may take longer to achieve those crucial breakthroughs that significantly propel science forward.”

A study detailing these findings, titled “Papers and patents are becoming less disruptive over time,” is being published in the journal Nature today.

Science is the driving force behind our society, and the decline in truly disruptive research is a warning sign for all of us

If we gauge it by the sheer volume of published papers, we are currently in a golden age of science. There are more scientists and publications than ever before, and while much valuable work remains underfunded, there is now more funding available than ever. Federal funding for research and development has increased from $3.5 billion in 1955 to $137.8 billion in 2020 – a nearly fortyfold rise in nominal terms, and roughly fourfold even after adjusting for inflation.

Fields such as AI and biotechnology appear to be flourishing, but outside of a few specific areas, it does not feel as though we are truly in a golden age of science. In the early 20th century, there were successive discoveries that substantially transformed our understanding of the world and revolutionized various industries: nitrogen fixation, enabling the feeding of billions; the structure of the atom and DNA; rocketry, plate tectonics, radio, computing, antibiotics, general relativity, nuclear chain reactions, quantum mechanics … the list goes on and on.

The 20th century seems to have had more groundbreaking discoveries that changed the world compared to the present, despite the increase in scientific research. This is the main idea explored in a new Nature paper titled “Papers and patents are becoming less disruptive over time,” which aims to systematically investigate the notion that there is more science but less world-altering science.

The study in Nature examines patents and papers to gauge how much future research builds on a specific publication or how much a particular piece of work contributes to “pushing science and technology in new directions.”

The findings indicate that there indeed appears to be a decrease in radical innovations compared to the past.

The question arises: are we becoming less proficient at producing transformative scientific advancements?

This is not a new question, as previous studies have also highlighted declining research productivity in various fields. The Nature researchers sought to assess this more comprehensively by analyzing 25 million papers (1945–2010) and 3.9 million patents (1976–2010) using a new metric termed the “CD index,” which assesses whether papers predominantly “consolidate” knowledge in the field or “disrupt” the field and point towards new research paths.

The study suggests that there is a decline in “disrupting” papers that change the field and indicate new research directions. This decline is remarkably significant.

One possible explanation is that all the most disruptive ideas have already been discovered. At the turn of the 20th century, there was significant basic work that remained unexplored. Moreover, important discoveries tend to be made by older researchers and larger teams nowadays, possibly due to the increased time and effort required to grasp all the necessary knowledge to advance in a field.

However, this explanation seems somewhat circular. Is science slowing down because we have already discovered all the transformative and crucial things, or do we believe that we have discovered all the transformative and crucial things because scientists are not finding any new ones?

There’s a possibility that the slowdown in science is not an inevitable natural consequence but a result of policy decisions. The process of allocating scientific grants, for instance, is flawed. Despite the ample funding available, innovative individuals with transformative ideas often struggle to secure grant funding. Additionally, obtaining funding entails navigating an increasing number of bureaucratic hurdles, with many leading scientists spending significant time writing grant proposals rather than conducting actual research.

For instance, Kaitlyn Suski, a postdoctoral researcher in chemistry and atmospheric science at Colorado State University, expressed in a 2016 Vox survey of scientists that the pressure to publish to maintain employment and satisfy funding agencies has led to numerous mediocre scientific papers with minimal new scientific content.

Claiming that scientific progress is slowing down because our predecessors have already exhausted all the good ideas might prevent us from recognizing that the deceleration of science could be due to our mismanagement of it. This mismanagement might lead researchers away from the most important research and toward producing small, incremental papers to satisfy funders and tenure review committees.

The decline of science has significant societal implications that affect a wide range of areas. Groundbreaking papers often result in new innovations that boost productivity, enhance quality of life, increase wages, and save lives. Some have suggested that the slowdown in scientific innovation is a major factor contributing to the leveling off of productivity and wages in the US.

In reality, the decrease in groundbreaking papers is likely the result of numerous factors, some of which are within our control and some that are not. Nonetheless, a recent paper in Nature makes it clear that the effects are substantial. Given that science is the driving force behind productivity and prosperity, understanding the reasons for its diminished performance is of utmost importance.

It is a common assumption that the pace of innovation is increasing. We frequently hear about new breakthroughs in artificial intelligence (AI), materials, genetic treatments, or high-speed aircraft. Indeed, the widespread adoption of new technologies appears to have accelerated, with phenomena like smartphones and apps such as Instagram reaching one billion users much more rapidly than in the past.

However, this conventional wisdom may be misleading. Innovation, in many respects, is actually slowing down. For example, research by Benjamin Jones has revealed that the amount of time required to make significant new discoveries in science has lengthened, as has the number of collaborators needed for groundbreaking scientific projects. Consequently, the average age of Nobel Prize winners has steadily risen.

This slowdown may be pervasive. For instance, sectors such as aerospace still rely on technologies from half a century ago, while most cars continue to employ the internal combustion engine, which was invented in the 19th century. Last year, we published Nesta’s report on the biomedical bubble, summarizing the evidence that, over the past 50 years, biomedical research and development (R&D) productivity has consistently declined, despite significant increases in both public and private investment. New drugs now cost much more than they used to.

More recently, a study by Nick Bloom and colleagues indicated that research productivity has been declining across both the private and public sectors. The study revealed that, despite a rise in the number of researchers and an increase in public investment in R&D, productivity growth has stagnated. The researchers found that ‘more than 20 times as many researchers are currently needed to generate roughly the same amount of productivity growth as 80 years ago.’ Furthermore, they demonstrated that this trend of declining research productivity applies to various fields, including computer chips, crop yields, and medical innovations.
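The arithmetic behind that “20 times” figure is straightforward. Bloom and colleagues define research productivity as productivity growth per unit of research effort; the numbers below are purely illustrative of that definition:

```latex
% Research productivity = TFP growth rate / effective number of researchers.
% If the growth rate g is roughly unchanged while the research workforce
% has grown from S to 20S, productivity per researcher has fallen 20-fold:
\[
  \text{research productivity} \;=\; \frac{g}{S},
  \qquad
  \frac{g/(20S)}{\,g/S\,} \;=\; \frac{1}{20}
\]
```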

Some may argue that these declining rates of return justify reducing public investment in R&D and redirecting the limited funding toward other priorities. Conversely, others may assert that, now more than ever, we need to boost R&D investments to reverse the productivity slowdown.

Whether you lean more toward one viewpoint or the other, a crucial question that needs to be tackled is how to enhance the returns on R&D. This is a realm that should be guided by facts and evidence, rather than intuition or self-interest, and it is therefore imperative that we confront it head-on.

So, what are the options? What could reverse this relentless decline in productivity? We propose three primary areas of action that should be explored by those responsible for managing R&D budgets at public funding agencies and large companies.

Leverage technology

The first step involves harnessing technologies with the potential to expedite innovation. Machine learning, in particular, could be a game-changer. According to a recent paper from the National Bureau of Economic Research (NBER), machine learning could represent the “invention of a method of invention.” If used effectively, it could significantly hasten the development of new materials or drugs. However, this will hinge on ‘having access not just to the underlying algorithms but also to large, granular datasets on physical and social behavior.’

An analogy can be drawn to double-cross hybridization, which enabled the development of a wide array of new corn varieties tailored to different conditions. The notion is that applying machine learning to tasks ranging from protein folding to drug discovery could accelerate the pace of R&D and alleviate the productivity issue. However, it remains uncertain whether this will be the case, and this can only be ascertained through thorough and systematic experimentation.

Adopt new R&D methods

The second step entails breaking free from the conventional approaches that largely govern R&D management worldwide. Many organizations still apply methods that have seen little innovation over the past half century. We recently released a Compendium of Innovation Methods, encompassing a range of approaches from challenge prizes to crowdfunding, designed to propel R&D into the future of creativity and productivity.

Instill a culture of experimentation

The third proposed action calls for organizations to promote a culture of experimentation. In recognition of the uncertainty surrounding the most effective paths to increased R&D returns, fostering a mindset and organizational structure that values and supports experimentation is crucial. Embracing the principles of lean startup and agile methodologies can facilitate swift experimentation and adaptation, ultimately leading to better outcomes in R&D.

Innovation in research and development (R&D) should incorporate new methods.

To revamp R&D management, it is crucial to break away from the traditional approaches that continue to dominate the field; many organizations still rely on methods that have remained largely unchanged for the past fifty years. Although some of the methods in the compendium, such as challenge prizes and crowdfunding, are already in use in certain areas and have been applied by Nesta, it is surprising how limited their adoption by major funders has been, and how little experimentation there is with newer approaches like randomised funding.

Another noteworthy observation is the infrequent application of the scientific method by funders on themselves – such as conducting experiments to test hypotheses, employing control groups, and gathering essential data. There are a few exceptions, such as the National Science Foundation in the US, but most organizations tend to stick with familiar methods.

At the very least, organizations should consider diversifying their research funding approaches and implement sound experimental research designs to determine the most effective methods for different stages of the R&D process across various industries. The Innovation Growth Lab plans to venture into this sphere by leveraging its success in utilizing experimental methods in business support and innovation policy and extending them to scientific research.

Enhancing the availability of valuable data to inform decision-making is another crucial step.

For R&D to improve, there should be a greater focus on learning. It is imperative to open, organize, and link data on research activities, along with funding information detailing what is being funded, the funding mechanisms, and the achieved outcomes. Our work in Innovation Mapping is beginning to demonstrate how this can be achieved through advanced tools for tracking real-time innovation.

With the advent of new technologies and search engines, such as our upcoming Clio and Rhodonite tools, it will be possible to create sophisticated real-time maps of technology evolution. Making this type of data open, easy to integrate, and analyze will be vital in making R&D more introspective, self-critical, and better equipped to test new solutions.

The slowdown in R&D productivity has not yet become a priority on any political agenda

Government officials and agency leaders barely mention it in their speeches. Apart from a few exceptions, agencies and funders have not yet engaged in substantial discussions about potential solutions. However, this is a problem that is likely to gain more attention in the coming years, primarily because the patterns in the data are becoming too glaring to ignore.

The primary question to address is whether the key stakeholders will wait for it to become a crisis, or if they will have the foresight and wisdom to take preventive action to resolve their own challenges.

This new article falls within a realm that could be termed “innovation studies”—the examination of invention itself. I’ve always been uncertain about my stance on these pursuits. There’s a considerable amount of history to investigate, and in more recent years, a wealth of actual data (like patent filings and citations in the academic literature, etc.), and to quote Pete Townshend, “From all this you’d imagine / that there must be something learned.” Yet, as you’ll discover, I still question how readily that learning occurs.

This new article serves as a prime illustration of the field. The researchers are reviewing the last sixty years of literature and patents (approximately 25 million papers and almost 4 million patents) and introducing a new metric, the “CD Index”:

The basic idea is that if a paper or patent is disruptive, the subsequent work that references it is less likely to cite earlier works as well; for future researchers, the ideas that contributed to its creation are less pertinent. Conversely, if a paper or patent is consolidating, subsequent work that cites it is more inclined to also reference its predecessors; for future researchers, the foundational knowledge of this work remains relevant, or perhaps even more so.
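As a concrete illustration of that idea, here is a minimal sketch, in Python, of how a CD-style score could be computed from a citation graph. This is my reconstruction for illustration, not the authors' code; in particular, the published index restricts forward works to a fixed citation window (five years in the paper), which is omitted here, and all identifiers are made up.

```python
def cd_index(focal: str, cites: dict[str, set[str]]) -> float:
    """CD-style disruptiveness score for one focal work, in [-1, 1].

    `cites` maps each work's id to the set of ids it references.
    (The published index also restricts forward works to a fixed
    citation window -- five years in the paper -- omitted here.)
    """
    predecessors = cites[focal]           # the works the focal paper built on
    total, n = 0, 0
    for work, refs in cites.items():
        if work == focal:
            continue
        f = focal in refs                 # does this later work cite the focal work?
        b = bool(refs & predecessors)     # does it cite the focal work's references?
        if not (f or b):
            continue                      # outside the focal work's citation neighborhood
        n += 1
        total += -2 * f * b + f           # +1 disruptive, -1 consolidating, 0 otherwise
    return total / n if n else 0.0


# Toy example: w3 and w4 cite the focal work w1 but ignore its root w0,
# so w1 scores as maximally disruptive.
graph = {"w0": set(), "w1": {"w0"}, "w3": {"w1"}, "w4": {"w1"}}
print(cd_index("w1", graph))              # -> 1.0
```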

At first glance, that seems like a reasonable concept, and it turns out that when you graph the average CD scores for both papers and patents since World War II, every discipline shows a notable decrease in this “disruptiveness” score over time. Interestingly, the patent graph is much tighter and less variable compared to the open-literature graph when examining different fields. Another intriguing metric arises from observing the fraction of new word pairs in the data relative to total word pairs over time.

The same fields (life sciences and biomedicine, physical sciences, social sciences, and a broad “technology” category) exhibit quite similar declines by this metric, and notably, all the graphs present a biphasic pattern, showing steep declines until around 1970 and then significantly slower declines afterwards. The authors do note that the number of “most disruptive” papers and patents has remained fairly steady over time. The substantial overall increase in paper and patent production allows this to align with the lower overall measures of disruptiveness.
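The word-pair metric is likewise easy to reconstruct in outline. The sketch below (again my illustration, not the authors' code) computes, for each year, the fraction of word pairs in that year's titles that never appeared in any earlier year; the tokenization is deliberately naive.

```python
from itertools import combinations


def new_pair_fraction(titles_by_year: dict[int, list[str]]) -> dict[int, float]:
    """For each year, the fraction of that year's word pairs never seen before.

    `titles_by_year` maps a year to the titles (or abstracts) published then.
    A "word pair" is an unordered pair of distinct words within one text.
    """
    seen: set[frozenset] = set()
    fractions: dict[int, float] = {}
    for year in sorted(titles_by_year):
        pairs = [
            frozenset(pair)
            for text in titles_by_year[year]
            for pair in combinations(sorted(set(text.lower().split())), 2)
        ]
        if pairs:
            novel = sum(1 for p in pairs if p not in seen)
            fractions[year] = novel / len(pairs)
        seen.update(pairs)
    return fractions


# Toy usage: the pair (atomic, energy) recurs in 1951, so only 2 of 3 pairs are new.
print(new_pair_fraction({1950: ["atomic energy"], 1951: ["atomic energy uses"]}))
# -> {1950: 1.0, 1951: 0.666...}
```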

You may already have thought of several objections and potential confounding factors, as I certainly did. To their credit, the authors have attempted to address many of these concerns. The database from which the above conclusions were drawn is the Web of Science collection, and to ensure that this isn’t an artifact of WoS indexing, they applied the same methodologies to the JSTOR database, the American Physical Society collection, and PubMed, yielding similar outcomes.

There’s also the genuine possibility that authorship and citation standards have evolved over time, leading to shifting baselines for these measurements. The authors have devised variations of the CD Index that account for the rising citation counts found in papers over time, but they report that these produced comparable results. Similarly, assessing the CD numbers concerning various other indicator variables (such as the number of new papers, average number of authors, number of papers cited, etc.) also revealed declining disruptiveness.

There’s also the “low-hanging fruit” argument—the notion that fundamental breakthroughs in any field can only occur once, naturally becoming rarer over time. I find it challenging to dismiss this, yet the paper indicates that the CD decline appears strikingly similar across diverse fields, which can’t all share the same low-hanging fruit (at least not to the same extent). In my view, while this last objection is valid, the phenomenon will likely apply to any field making measurable progress, albeit to varying degrees: “they should appear more different than they do if low-hanging fruit is an explanation” doesn’t, in my opinion, negate the idea that it’s probably part of the explanation.

Another perspective on the growth of knowledge overall can be considered in this context

The authors point out that this goes both ways: as time progresses, there’s both an increase in available knowledge to build upon and more knowledge to master. The former seems beneficial for innovation, whereas the latter may hinder it. Interestingly, the authors suggest that their analysis indicates that, for open-literature papers, the positive impact of increased knowledge is somewhat more evident, while for patents, the growth of knowledge in a specific field appears to have a more inhibitive effect on disruptiveness.

“Given these conflicting findings,” the authors explored not just the growing knowledge but also its application. They found that the diversity of cited work has been decreasing (as researchers focus more narrowly within citations), along with an uptick in citations to the top 1% most frequently cited papers or patents. The average age of cited works has been increasing, and similarly, there’s a rise in self-citation. All these trends seem to indicate that scientists and inventors are increasingly relying on familiar knowledge.

I can only speculate, but I believe these findings are likely authentic and probably the result of multiple factors interacting simultaneously. For one thing, I’m not prepared to dismiss the low-hanging-fruit explanation as part of the narrative. Another element, I think, is the growth of research as a profession over the past 60 to 80 years. Additionally, there may be misaligned incentives within many of these research careers—the process of applying for grants, having projects approved, writing papers, seeking promotions, and navigating departmental politics does not necessarily reward innovative work in an ideal manner. The saying goes, “the nail that sticks up gets hammered down,” and these declines in the CD index might be just what we would expect as a result.

There’s also a historical aspect that the authors may not have taken into account. All of the figures presented in this paper cover a timeframe primarily since World War II, which can be considered a starting point for the modern scientific era. However, it was also very close to a pivotal moment: the war acted as a technological turning point like no other, as well as a junction for attitudes toward and funding for research and technology.

Innovations advanced during the war in ways that had never been seen in previous conflicts, with inventions and applications rapidly building upon each other to secure military advantages. Just to name a few off the top of my head, these included radar, proximity fuses, code-making and code-breaking, piston engine advancements followed by jet engines, drones, ballistic rockets, and of course, the atomic bomb. The evolution of aircraft from 1939 to 1945 serves as a striking illustration of this general effect.

The insight gained by industrialized countries was that advancements in science and technology were vital for national survival, and the eventual Cold War only reinforced this with developments in space exploration, satellite and electronic reconnaissance, intercontinental ballistic missiles, computing technologies, and much more. The scientific landscape we inhabit today was shaped by the events between 1939 and 1945, and any measurements of scientific activity that began during this significant shift will inevitably be influenced by it.

The pace of revolutionary scientific discoveries and technological advancements is diminishing, despite an increasing volume of knowledge, according to an analysis published Wednesday that examined millions of research papers and patents.

While earlier studies have identified declines in specific fields, this investigation is the first to “clearly and convincingly document this reduction in disruptiveness across all principal areas of science and technology,” stated lead author Michael Park in an interview with AFP.

Park, a doctoral student at the Carlson School of Management at the University of Minnesota, described disruptive discoveries as those that “diverge from established concepts” and “drive the entire scientific discipline into unexplored territory.”

The researchers assigned a “disruptiveness score” to 45 million scientific articles published between 1945 and 2010, as well as to 3.9 million US-based patents from 1976 to 2010.

Over these periods, research papers and patents became increasingly likely to consolidate or build on prior knowledge rather than disrupt it, according to the findings published in the journal Nature.

The scoring was based on how often the articles were cited in other research five years post-publication, operating under the assumption that the more disruptive the research, the less frequently prior works would be referenced.

The most significant drop in disruptive research occurred in the physical sciences, such as physics and chemistry.

According to senior study author Russell Funk, “the nature of research is evolving” as incremental innovations gain prevalence.

One suggested explanation for the decline is that the “low-hanging fruit” of science has already been harvested.

If that were accurate, the decrease in disruptiveness across various scientific domains would occur at differing rates, Park noted.

However, “the declines are quite consistent in their timing and speed across all major fields,” Park stated, suggesting that the low-hanging fruit theory is likely not the cause.

Instead, the researchers highlighted a concept referred to as the “burden of knowledge,” which posits that the sheer volume of knowledge now required to master a field leaves scientists little opportunity to explore new frontiers.

This results in scientists and inventors “concentrating on a narrow segment of existing knowledge, which prompts them to produce more consolidating work rather than disruptive innovations,” Park explained.

Another contributing factor could be the “growing pressure in academia to publish extensively, as that is the benchmark for academic evaluation,” he added.

The researchers urged universities and funding organizations to prioritize quality over quantity and to consider providing full funding for year-long sabbaticals to enable scholars to read and think more extensively.

“We are not becoming any less innovative as a species,” Park reiterated, citing recent advancements such as the application of mRNA technology in COVID-19 vaccines and the detection of gravitational waves in 2015.

Jerome Lamy, a historian and sociologist of science at France’s CNRS research agency, who did not participate in the study, remarked that it highlighted the increasing “ultra-specialization” and publishing pressures over the years.

He said that academics worldwide feel “compelled to break down their papers” to boost their publication counts, a global trend that has resulted in a “dulling of research.”

In 2005, well before the emergence of Apple’s Siri and Amazon’s Alexa, two startups—ScanSoft and Nuance Communications—merged to capitalize on a growing opportunity in speech recognition. The new firm developed advanced speech-processing technology and experienced rapid growth for nearly a decade—averaging a 27% annual increase in sales. Then, suddenly around 2014, it ceased expanding. By 2019, revenues had stagnated, remaining roughly the same as in 2013. Nuance faced significant challenges as major tech companies that were once collaborators turned into competitors.

Nuance’s situation is far from exceptional. Across all primary sectors and technology areas, startups are encountering unparalleled challenges. New companies continue to emerge to seize innovative prospects. Additionally, these firms now have access to an unprecedented influx of venture capital. Yet, the startup ecosystem is not in a healthy state. Innovative startups are now growing at a significantly slower rate compared to similar companies in the past.

Unexpectedly, technology is a significant factor—specifically, proprietary information technology held by large companies that control their sectors. We often associate technology with disruption, where innovations from smaller, newer enterprises allow them to expand and eventually replace older, less efficient ones. However, these proprietary technologies are now hindering industrial turnover, which has sharply decreased over the past twenty years. This reduction in dynamism has extensive adverse effects on the US economy. It has slowed the expansion of innovative companies, and researchers have linked this slower growth to a substantial decline in productivity growth, impacting the entire economy, including personal incomes.

Nuance was established in 1994 as a spinoff from SRI International (originally the Stanford Research Institute), which had created speech-recognition technology for the US government. ScanSoft originated as a spinoff from Xerox. Before their merger in 2005, speech recognition was limited by computer processing power: systems could understand only restricted vocabularies, yet they still found utility in specific commercial areas, such as telephone customer support and the transcription of medical records.

By the late 2000s, circumstances had shifted. As computing power increased, Nuance succeeded in creating a significant advancement: “large vocabulary continuous speech recognition.” This innovation allowed users to speak freely about any subject, and the technology could accurately transcribe it in real time. Nuance utilized this technology in Dragon Dictation, an app that Apple highlighted when it launched the iPhone 3GS at its 2009 Worldwide Developers Conference. After Apple endorsed the product, Samsung and other phone makers sought after it, as did Google, Amazon, and Microsoft. Nuance experienced rapid growth by attracting these major clients and through millions of individual consumers buying the iPhone app, which became the leading business productivity application in the iTunes store. By 2011, Apple had released Siri, which featured Nuance technology, leading to Nuance’s revenue increasing to $1.7 billion in 2013.

However, this growth was fleeting. Nuance wasn’t alone in recognizing that voice was set to become a primary mode of human interaction with computers and cloud services. Voice recognition evolved beyond merely dictating text to include shopping, information searches, music and video selection, appliance control, and additional functions. It was fast, hands-free, and—compared to keyboards and mice—a far more intuitive way for humans to communicate.

Major tech companies began investing significant resources and talent into this opportunity. Apple focused on developing its own systems, Amazon pursued its Alexa voice assistant, and Google quickly followed suit with its Home Assistant. These firms successfully recruited talent from Nuance, absorbing many top professionals into their organizations. Amazon now employs over 10,000 engineers working on Alexa products, significantly surpassing the number of core R&D staff Nuance had at its peak.

Beyond their financial clout, these large enterprises boasted advantages like extensive customer bases, complementary products, and vast data resources, enabling them to constantly enhance their voice-recognition technologies. Currently, there are 300 million Alexa devices in use; Google processes an average of 5.6 billion searches daily, with half of its users reporting that they use voice for searches. Amazon has developed a robust ecosystem where third-party developers contribute over 100,000 new “skills” to Alexa, covering functionalities from playing specific radio stations to telling jokes. Additionally, Amazon has licensed the Alexa far-field technology to appliance makers, who utilize it for controlling dishwashers, washers and dryers, and vacuum cleaners.

Nuance struggled to compete in this arena. It shifted its focus to niche markets like healthcare before being acquired by Microsoft in 2021.

The situation Nuance faced is not merely a retelling of the standard narrative of large companies outspending startups. Across various sectors, dominant firms are leveraging large-scale information systems to outmaneuver their competitors, including innovative startups. They are utilizing proprietary software to effectively manage complexity, thereby distinguishing themselves from rival firms. This strategy has enabled them to bolster their market dominance and avoid being surpassed by competitors.

In retail, Walmart’s inventory management and logistics software enables it to offer a much wider range of products at lower prices, customize each store according to local preferences, and react swiftly as demand fluctuates and popular items arise. By utilizing vast data systems, leading financial institutions customize credit cards and home equity loans for individual consumers on an extensive scale and subsequently target their marketing efforts.

Even the foremost waste-management companies and health insurance providers are making significant investments in proprietary software to outperform their competitors. Overall, companies (excluding those whose core product is software) now invest over $240 billion annually in their internal software, a steep rise from $19 billion in 1985. Most of that change comes from large firms. Since 2000, the top four companies in each industry, ranked by revenue, have increased their investment in their proprietary software by eight times, far exceeding investments from second-tier firms.

These investments have yielded positive results. Since the 1980s, the leading four firms in various sectors have boosted their market share by 4% to 5%. My findings suggest that the majority of this growth stems from investments in proprietary software.

This heightened dominance of leading firms corresponds with a notable decrease in the risk of being disrupted, a concern that has preoccupied corporate leaders since Clayton Christensen published The Innovator’s Dilemma in 1997. When Christensen wrote his book, disruption was becoming increasingly common. However, since around 2000—when top firms began their spending spree on proprietary systems—this trend has sharply declined.

Within a given industry, the likelihood that a top-four firm (by revenue) falls out of the top four within four years has decreased from over 20% to approximately 10%. Here, too, investments made by dominant firms in their internal systems primarily account for the shift. While some new technologies have disrupted entire industries—as the internet did to newspapers and DVDs—others are now shielding leading firms from disruption.

How does this occur, and why does it seem to impact so much of the economy? The answer lies in the fact that these business systems tackle a significant drawback of modern capitalism. Since the late 19th century, innovative companies have often discovered that they can attain remarkable cost savings by producing at a large scale. This transition significantly lowered consumer prices, but came with a downside: to reach those substantial volumes, products and services had to be standardized.

Henry Ford famously asserted that car buyers could choose “any color so long as it is black.” Retail chains gained their efficiencies by offering a limited range of products to numerous stores. Financial institutions provided standardized mortgages and loans. Consequently, products had restricted features; stores had limited selections and were slow to adapt to shifts in demand; and many consumers struggled to obtain credit or received it only under costly and unsuitable terms.

Software alters this dynamic, partly addressing these limitations, because it lowers the cost of managing complexity. With the appropriate data and organization, software enables businesses to customize products and services to meet individual needs, providing greater variety or more product features. This capability allows them to outperform competitors, securing a dominant market position. Walmart stores provide a much broader selection than Sears or Kmart stores and respond more rapidly to evolving customer requirements. Sears, once the leader in retail and now bankrupt, has been supplanted by Walmart.

Toyota swiftly produces new models as it identifies emerging consumer trends; smaller car manufacturers lack the financial resources necessary to invest billions for such adaptability. Similarly, only Boeing and Airbus are able to construct highly intricate new jumbo jets. The leading four credit card companies have the data and systems to effectively tailor offers to individual consumers, maximizing both profit and market share; they dominate the market.

These software-driven platforms have enabled leading firms to solidify their dominance. They have also hindered the growth of competitors, including innovative startups.

Various pieces of evidence support the notion that startup growth has slowed significantly. One indicator is how long it takes venture-backed startups to secure funding: from 2006 to 2020, the median age of a startup at the seed-round stage increased from 0.9 years to 2.5 years. The median age of a late-stage startup rose from 6.8 years to 8.1 years over the same period. Among companies that were acquired, the average time from initial financing to acquisition tripled, from just over two years in 2000 to 6.1 years in 2021. The trend was similar for firms that went public. The clearest indication of a slowdown, however, lies in what happens when firms become more productive.

The defining characteristic of dynamic economies, which economist Joseph Schumpeter called “creative destruction,” is that more efficient firms (those offering superior products, lower costs, or innovative business models) tend to expand more rapidly than their less effective counterparts, ultimately replacing them. However, after 2000, a firm at a given level of productivity grew at only half the rate of a firm with the same productivity in the 1980s and 1990s.

This indicates that productivity now has a weaker effect on growth than it did in earlier periods.

When productive firms grow at a slower pace, their chances of overtaking industry leaders and displacing them—a key feature of disruption—are reduced. Recent research I conducted alongside my colleague Erich Denk established a direct link between the diminishing effects of productivity enhancement and the increasing dominance of large firms, particularly regarding their investments in software and other intangible assets.

Another perspective, strongly articulated by congressional investigators in hearings and a staff report released in 2020, attributes the decrease in economic dynamism to a different cause: the decline of government antitrust enforcement since the 1980s. According to this view, major firms have been allowed to acquire their competitors, leading to a reduction in competition. Such acquisitions have enhanced the dominance of these companies, especially in the technology sector, resulting in fewer new tech firms emerging and declines in venture capital funding for early-stage companies.

Nevertheless, the entry rate of new tech firms has only slightly declined from the remarkable pace of the dot-com boom, and early-stage venture capital investment is at record levels, with roughly twice as many financings today as in 2006 and four times the total amount invested. The problem is not that large companies are blocking startups from entering markets or obtaining funding; it is that they are deploying proprietary technology at a scale that makes it harder for startups to grow. Moreover, major firms such as Walmart and Amazon have thrived mainly by implementing superior business models, not by acquiring competitors. In fact, acquisitions by dominant companies have become less frequent since 2000.

Certainly, such acquisitions can sometimes influence the startup ecosystem. Some researchers have pointed to “kill zones,” in which Big Tech acquires companies to stifle competition and venture capital subsequently becomes scarce. However, other studies show that startups often adapt by redirecting their innovative efforts to different applications. Furthermore, the prospect of being acquired by a large corporation often motivates entrepreneurs to found startups in the first place. Indeed, despite the situation with Nuance, the number of startups working on speech recognition and natural language processing has quadrupled since 2005, and 55% of them have secured venture capital.

The slowdown in the growth of innovative startups affects more than a handful of tech companies; the challenges facing enterprises like Nuance feed into broader problems for the health of the entire economy. Research from the US Census Bureau indicates that the sluggish growth of productive firms accounts for much of the overall slowdown in aggregate productivity growth, which measures the output produced per person in the economy and serves as a rough indicator of economic well-being. My research also suggests that this trend contributes to rising economic inequality, deeper social divisions, and the diminishing effectiveness of governance.

What steps are necessary to reverse this trend? Strengthening antitrust enforcement might help, but the shifts in economic dynamism have more to do with advances in technology than with mergers and acquisitions. The more fundamental issue is that the foremost new technologies are proprietary and accessible only to a small number of large corporations. Historically, new technologies have diffused widely, whether through licensing or through firms developing independent alternatives, and this diffusion has fostered greater competition and innovation.

The government has occasionally aided this process. For instance, Bell Labs invented the transistor but was mandated by antitrust authorities to license the technology broadly, leading to the formation of the semiconductor industry. Likewise, IBM played a crucial role in creating the modern software industry when it began to sell software separately from hardware due to antitrust pressure.

Currently, we observe some parallels even without government intervention. For example, Amazon has opened its proprietary IT infrastructure, giving rise to the cloud industry, which has significantly bolstered the prospects for many small startup companies. However, antitrust legislation could be utilized to encourage or require more large firms to make their proprietary platforms accessible. Additionally, relaxing restrictions imposed by noncompete agreements and intellectual-property rights on employee mobility could further promote the distribution of technology.

Finding the appropriate balance of policies will be challenging, and it will take time; we do not want to undermine incentives for innovation. Nevertheless, the first step is to acknowledge that technology plays a new role in today’s economy. Once a driving force for disruption and creativity, it has now taken on a different function.
