
The State of (Scientific) Innovation

🎧 On living literature reviews, disruptions in citations, and creating a narrative (1 hour with transcript). 
Published on Feb 13, 2023

In this installment of Convos on the Common, we chat with Matt Clancy, an economist by training who runs New Things Under the Sun, a living literature review on innovation. This conversation investigates what living literature reviews are, the ever-changing science of citations and disruption in publishing, and some interesting correlations and emerging ideas within the exciting topic of innovation.

Listen here! Read the transcript below!

Or add to your RSS feed via {Anchor}


Introductions

Matt Clancy & New Things Under the Sun

Sarah Kearns 00:30

Thank you, Matt Clancy, for taking the time to chat with me. I'm excited to have you on the Commonplace podcast.

Matt Clancy 00:44

Thanks for having me.

Sarah Kearns 00:47

So how about we start off with you introducing a little bit about yourself?

Matt Clancy 00:52

Sure. My name is Matt Clancy. I'm an economist. I work at Open Philanthropy as a research fellow on metascience. But for a year or two now, I've been running a website on PubPub called New Things Under the Sun, which is a living literature review about social science research on innovation, science, creativity, and things like that. The Open Philanthropy role is a new job for me. Before that, I was at the Institute for Progress think tank working on these issues. And before that, I was at Iowa State University.

Living Literature Reviews

Sarah Kearns 01:31

So what exactly is a living literature review? I haven't come across that term anywhere else.

Matt Clancy 01:44

Yeah, it's a unique term. I don't know if I made it up. People often ask, what is this thing? Is it a blog? And it sort of is a blog, in that it's self-published writing. But basically, the conceit behind it is that there's all this great academic literature, but, especially in the social sciences, reading one study on its own isn't enough to feel confident in the answers. It's just not the kind of field where you can do one decisive experiment and be done. So you really need to synthesize several lines of evidence from different papers. And this does happen in books or academic literature reviews, but books are few and far between, they're really long, and they get stale. There's a long lag between when the research is done and when the book finally comes out. Then, as soon as it's out, it can't really be adjusted unless there's a second edition, and even then you're only going to make minor changes. Meanwhile, academic reviews are often behind paywalls, and therefore they're geared towards other academics.

The goal for this is to create something that's a lot more accessible to a non-specialist reader, although tons of my readers are other academics. Each article on the living literature review is between 2,000 and 5,000 words, and it covers some really narrow claim about innovation, like: what's the role of the office and physically being together in innovation, or what's the impact of age on the style of innovation? I try to focus on stuff that's maybe too narrow for a traditional academic literature review. Then I try to synthesize, usually, 3 to 15 recent articles on the topic.

It's living in the sense that it is continually updated. When new articles come out, I go back and edit the articles to incorporate the new stuff, so that each article stays close to the frontier. It's not like I update it instantly, but you can usually be confident that it's not going out of date as quickly as a book does.

Most people experience it not necessarily by reading the website, but by reading the associated newsletter, which is a sort of changelog for the website. It tells you: here's a new article, and what's in it. Then, usually every other month, there's a post full of updates about all the new papers that got incorporated into old articles and what was changed. And sometimes I change my views because the literature evolves, and I make that clear, too.

Sarah Kearns 04:37

Is it just you that works on the site, or is there like a panel of folks that also contribute to this?

Matt Clancy 04:46

So far, it's just me. That's a little bit intentional. One of the problems with the academic world today is that everybody has incentives to publish new research, and then more new research. To publish well, you have to specialize and get right to the frontier, which means you have to be focused on your little niche. And so it's hard to see the broad space.

The living literature review is, one, supposed to make it easier for readers to see what's going on in the broader world outside their little niche. But also, I thought if one person is doing this and has all these papers bumping around in their head, maybe they'll be able to do a better job and see interesting connections, compared to a fragmented crew of 100 people who've written 100 different articles and aren't tracking the connections between them. That said, I am right now in the process of doing my first collaboration with somebody. But even there, the goal is to collaborate with somebody so that they can help me enter a new area that I don't know quite as well. I'm still reading all the literature so that we can keep this effective.

Sarah Kearns 06:12

Yeah, that makes sense. I remember when I was in grad school, being so overwhelmed by both individual, more narrow studies and also by literature reviews. They do certainly become outdated very quickly, but there's just so much literature to synthesize, and it's definitely hard to balance that: trying to get your thesis done in this very specialized topic, but then also trying to stay abreast of everything that's going on.

How are you curating and keeping up with all this literature? I'm imagining it's like the red-strings meme, with everything connected to everything else.

Actual footage of Matt writing a literature review (no, it’s the Always Sunny in Philadelphia Pepe Silvia meme).

Matt Clancy 07:02

Honestly, I can't keep up with it all; I do the best I can. I should say, this is not a hobby. At Open Philanthropy, 30% of my time is just on this project. It's a professional project that I do during work hours. Just to set expectations, because otherwise people wonder, how does he do it?

Swimming in the Sea of Literature

But how do I curate and stay on top of things? The basic approach is, I have a Ph.D. in economics and I'm a peer reviewer for a lot of these journals, so I trust myself to do a good job of the curation and reading and deciding what goes in and what doesn't.

In terms of staying on top of everything, I subscribe to some services that send me weekly emails with everything new that has certain keywords, is by certain authors, or is published in certain journals. Honestly, I also get a lot of value from Twitter. I encounter a lot of stuff there that wouldn't make it to me based on the authors, journals, or keywords I know.

I just keep an ever-growing list of topic ideas, so when I come across an interesting paper, I slot it into that list. Then I work on either what seems most interesting at the time or, more generally, a larger theme.

One thing I haven't mentioned about these living literature reviews is that sometimes (I've only done this three times) I'll write a longer article that tries to synthesize the articles on my own website into some bigger argument that you can't make just drawing on five to ten papers. Right now, in the back of my mind, I'm working on a longer-term project about theories of why science and technology are getting harder. There are a bunch of gaps in that argument that I need to fill in, and that helps guide my work. Sometimes there are issues and questions related to science and innovation that I need to solve for myself. As a general principle, I'm trying to find questions and articles that are directed towards some kind of instrumental value. Sometimes I do write just because I think stuff is interesting, but I like working for Open Philanthropy or the Institute for Progress because these are both organizations that use academic research to accomplish something. And I like that little tug towards relevance when trying to think through and set agendas.

Sarah Kearns 10:18

You mentioned that sometimes it takes a while to put ideas together, so I'm a little curious, as an editor and writer myself, about the timeline between when studies come out and when you're synthesizing and writing about them. Because I appreciate the slow-journalism approach of determining what lands, what sticks.

Creativity and Expertise and Trust

Maybe that speaks to a bigger question about the role of expertise, or creativity, while you're synthesizing these things? I'm asking based on your own experience in economics, getting a PhD and being a peer reviewer, but also: is there any research about the role of expertise and creativity in the synthesis of these innovative research topics?

Matt Clancy 11:25

The closest thing I can think of, and this is not necessarily about expertise per se, but I have views on the internet world and Substack. When it's one person behind a project, readers either trust me or they don't. They know what they're getting, which would be less true if it were somebody different every week. Because everything is up there, you can see a very long back catalogue and gauge: does this guy know what he's talking about? He's written like 100,000 words, and I can read the part that I know best and see whether he did a good job.

Going back to the bigger question about creativity and expertise: one thing that I'm influenced by is this idea that a lot of really important breakthroughs come from making connections between fields that have not previously been connected. There's actually reasonably good academic work on that. People will look at patents that cite unusual combinations of technologies, or that are classified by the patent office as, say, a tractor patent but also a neural network patent, two kinds of technology that hadn't been attached to the same patent before. Those kinds of patents, the ones that make weird combinations, are more likely to be highly cited and perceived as highly valuable. We have similar findings in the academic literature about papers that make weird combinations or cite strange combinations.

That's another reason I thought this was a useful project: I can help other people make those connections with the literature that I know best, but I was also hoping that I can maybe make some of those connections too.

Sarah Kearns 13:29

I want to talk about citations with you. But I think I'm gonna put that on hold for a second, because you mentioned Substack.

Creating Networks and Responsibility

Maybe I'm just slow to get into that world, but I feel like it's booming right now. And it's really interesting that, perhaps in response to the bigger media outlets, there's this, I don't want to say fragmented, but individual or decentralized or personal thing happening. That's an interesting contrast to the role of institutions and organizations and peer-reviewed journals. Both have their place, but I'm curious what you think the differences are and how they fit together?

Matt Clancy 14:20

I'm not going to say that Substack is better, just that it can do different things. And since it's new, it's the place where a lot of new stuff is happening.

We've had big institutional publishers for a long time. I think when I started this project, it was certainly very useful to know that Substack had this model of subscription-based writing where individual writers could support themselves. It turns out I've never had to go that route; everything I've done has been either professionally supported or supported by grants, which is what I would prefer, because I'm trying to disseminate knowledge.

Substack also has this recommendation system that, it turns out, works really well for attracting readers, including people you wouldn't normally be able to reach. I have well over 10,000 people who subscribe to the newsletter version, and a big chunk of that comes from other newsletter writers recommending my newsletter, and people finding me that way.

To your question about individuals versus institutions, I think one reason is that it's important for this project to be the work of one person. My reputation is all tied up in it, so I have strong incentives to do a good job, and I have a strong sense of ownership. You could imagine another world where I'm just updating Wikipedia articles to a very high quality standard, and maybe that would actually be better. But I suspect I would lose interest, or other things in my life would come along, and it would always stay on the hobby side. Maybe, maybe not. You know, I'm an economist, and we focus on people being selfish, and so on. So the selfish angle is: if it's all tied to an individual and not an institution, then you have reputational ownership of it to some degree.

Sarah Kearns 16:49

Yeah, I think that makes sense, that there's an ownership to it. So there's more of a drive to steward that information, and your name is tied to it. On Wikipedia, I guess you can see the track changes, but it's not that same visceral byline.

But that's interesting, what you said about Substack having the recommendation system. It almost sounds like, even though it's still decentralized (I don't really know if that's the right word), it's still important to have networks, and people creating those recommendation systems and credibility.

Matt Clancy 17:44

It's surprising. This feels like a Substack podcast now, but I was attracted to them because you can download the list of everyone who subscribes to your newsletter, so you can basically walk at any time: if they change their policy, you can take your list of subscribers and go. So it seemed very safe, and it's free.

At the beginning, a lot of people were using a lot of different platforms, but everybody I've seen has migrated to Substack, because everyone has realized there's this advantage to being in that recommendation system. It's a self-perpetuating feedback cycle: because people want to get recommended by other Substack writers, they join, and that just makes the pool of people who can recommend you bigger and bigger.

Sarah Kearns 18:54

There's something to say there about free infrastructure. Twitter is free, and Substack is free. I mean, Twitter probably sells your data, like all the other platforms do, but it's effectively accessible, like a free public infrastructure.

Citation Motivations

There's that want to be recognized, though. I'm trying to tie this into citations, because I feel like there's a common thread there. Oh, I'm recommended by this person, or ooh, a bunch of people retweeted me, or ooh, a bunch of people cited my paper. It feels like this motivation to be cited is a universal thing across different media platforms.

Matt Clancy 19:44

Right. Tons of this academic literature about the economics of science and innovation relies on citations, like how frequently a paper is cited, as a rough proxy for how good or important it is. And since I write about this all the time, I wrote a special article just about citations. My conclusion is, with caveats: if you're comparing a small number of papers, citations are probably not useful. If you're trying to decide which of a handful of papers is more impactful based on their citations, there's too much noise to determine that. But if you're trying to compare average traits across millions of papers, then the law of large numbers starts to give you better answers.

The basic argument is, if somebody cited you, that's a signal that they engaged with your work to some degree and your work maybe had some influence. In surveys where people ask scientists, "you cited this paper, how important was it for your research project?", the answers range all the way from "my paper would not exist if not for this paper, because it created a new research technique" to "it just impacted the sentence the citation was in, but nothing else about the paper would change." By that metric, 20% of the citations people make are influential or very influential. That means 80% don't really matter that much; there's a ton of citing the literature just because people feel obliged to cite it. But still, when you have millions of citations, that 20% signal begins to pick things up.

The way I think about it is, there's the inherent value of your research contribution. For example, if you had an objective group of unbiased people evaluate your contribution, they could say whether it's a great contribution or not. Citations are not the same thing as that. Citations are not a bad measure of impact, because they measure how many people read and cite your paper. But you could imagine two papers that have equal inherent value if they were being read by unbiased people, but because we don't live in an unbiased system (maybe one has a more famous academic on it, or maybe one was by a minority author, and so on), one of them could end up getting a lot more citations. And in some sense, it's maybe accurate that the one that gets more citations is also having more impact. But that's not to say it's a fair or good system.

Long story short: like how people retweet you, this social information you get from people citing your work is an important signal. But citations are weird, because highly cited work is also easier to discover. If you pick any random paper and look through its citation list, a highly cited paper is more likely to show up there, and then you read it.

Sarah Kearns 23:28

Yeah, I think I've heard this thing, I think it's called the Pareto Principle, that goes along the lines of: once you publish a book, you're more likely to publish books that more people will read. Just because you have your foot in the door means it's likely you'll keep having those chances to succeed. That may speak to how being a famous author from Harvard, or somewhere fancy, means you're more likely to get noticed, because you're what people consider prestigious. But it's interesting, I feel like that 80-20 statistic comes up in a lot of circumstances.

Matt Clancy 24:20

Yeah, the Pareto Principle is like: the top 20% most cited papers get like 80% of all citations, or something. I don't know if those are exactly the numbers for academic citations, but it's got the same flavor, where a small number of papers get most of the citations.
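As a rough illustration of that concentration, here's a minimal sketch in Python. The citation counts are made up (drawn from a heavy-tailed distribution; real distributions vary by field), so the exact share is only illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical citation counts for 10,000 papers, drawn from a
# heavy-tailed (log-normal) distribution, roughly like real citation data.
citations = rng.lognormal(mean=1.0, sigma=1.5, size=10_000)

sorted_counts = np.sort(citations)[::-1]          # most-cited papers first
top_20_percent = sorted_counts[: len(sorted_counts) // 5]

share = top_20_percent.sum() / sorted_counts.sum()
print(f"Top 20% of papers hold {share:.0%} of all citations")
```

With these toy parameters, the top 20% end up holding roughly three-quarters of all citations; how close you get to a literal 80-20 split depends entirely on how heavy the tail is.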

One interesting wrinkle on all this: you could imagine this implies that highly cited papers are just getting extra citations because people know about them, but they're not really engaging with them, and so you should discount highly cited papers. But the kind of sad truth is that these surveys indicate the opposite. If I ask, "you cited these two papers, how influential were they?", and one of them has a lot of citations and one of them doesn't, you're more likely to say the more cited one was highly influential. I think that speaks to the discovery thing, where highly cited papers are more likely to influence people. If a paper is highly cited, it's easier for you to discover, which means you may already have known about it, maybe you read it in grad school, and you discovered it before you started your own paper, so it had a chance to influence you. Whereas with the other paper, maybe you were just doing your due diligence at the end to make sure you cite everybody, and you cited it, but it didn't influence you.

I still feel comfortable using citations for the kind of academic studies I cover, where people are usually looking at thousands or millions of papers. But whenever possible, it's the synthesis idea again: I like to draw in other lines of evidence that don't depend on citations. That's not as frequent as I would like, though.

Beyond Citations

Sarah Kearns 26:38

What are the other metrics, besides citations, that you're trying to identify?

Matt Clancy 26:47

So I just wrote a post yesterday about scientists who move abroad, looking at the impact of migration on innovation. The studies looked at how many papers you publish and how highly cited they are, but one of them also looked at whether people were invited to speak at the International Congress of Mathematicians, because it was a paper about mathematicians. Other work has identified papers that are highly influential because the papers that come after them start to use similar language; you can see their footprint through statistical language patterns. On the technology side, people look at things like: did the FDA approve a new drug? Was there a spinoff or a commercialization of this thing? At very broad levels, when you're measuring the impact of big fields of science, people will look at, say, cancer research and the mortality rates of cancers as a way to assess whether we're learning useful stuff. All of those have their own challenges, which is, again, why I come back to this being social science, where you've got to weave together different threads and hope that their imperfections are not lined up, so they can cover each other's holes.

Sarah Kearns 28:33

Yeah, that makes sense; these different metrics are orthogonal. They're not going to give the whole picture in and of themselves, but they're compatible, and they each explain different aspects of the space.

You mentioned that these things seem harder to measure, like the impact of migration or tracking how people are using language. Is that why citations are commonly used as a standard, because they're easier?

Matt Clancy 29:10

Yeah, I think data availability is the big reason people use citation metrics. They're also in a self-perpetuating cycle where, because everybody else uses them, you can use them and not get criticized. When people have looked at highly cited papers, they find that these are textually influential papers too; there's a positive association, though it's not super strong. The papers that are highly cited but don't influence texts are kind of interesting, and vice versa. For example, papers that introduce a new data set will often be highly cited, but they don't have much impact on the language people use.

Other people have looked at peer review reports. They get access to the recommendations from peer reviewers for journals. It's all anonymized, but it's about whether to accept, reject, or request revisions. And they look at the association between the peer review decision and how many citations papers go on to get. Those seem to be positively correlated: the peer reviewers' consensus view is reasonably aligned with whether the paper goes on to get lots of citations from other people.

The last place people have looked is patents. I think this one is kind of interesting, at least for STEM fields where you think the underlying science might generate new technologies: people look to see if highly cited academic papers are more likely to be cited by patents. And they are, a lot more likely. Now, you could imagine this is just the discoverability problem again. But it also seems to be true that the patents that cite highly cited academic work are themselves more valuable patents by a lot of different metrics. That, I think, is evidence that the underlying invention is probably better because the science from the highly cited paper was also more innovative or did something new.

You do come back to the fact that only 20% of citations are rated as influential at all. That's fine for my job, where I'm studying papers in the thousands, but if you're trying to assess somebody for tenure and you just look at their citation count, it's too small a sample to really be meaningful.

Sarah Kearns 31:58

Yeah. I'd like to think that there is a shift happening, at least at that more granular level of tenure and promotion. Do you see that changing? How do you see either citations or these other metrics having an influence on prestige or tenure and promotion?

Matt Clancy 32:35

I don't have great insight; I guess I don't see them going down that much. I think there is more awareness, growing awareness at least, of some of the biases. There are studies now that have tried to investigate biases, like whether work gets penalized in citations if the author is a woman or something. But I don't have a good article I can point people to right now.

But still, there's a difference between writing a paper documenting a fact and that fact changing practice. I don't know exactly how this is affecting how tenure committees actually review people's work. There's a long tradition of soliciting feedback from other academics who, hopefully, are familiar with the candidate's work at a more qualitative, holistic level than just a numerical, quantitative one.

What is innovation anyway?

Sarah Kearns 33:47

Yeah, that makes sense; I think I agree with that. There's something that I feel like we've talked about a lot, but I haven't actually clarified what it means. What does innovation mean? What are you researching in particular here? Maybe we should have started with that question.

Matt Clancy 34:08

There are technical meanings of the word innovation, but when I use it, I mean it in the colloquial sense that I think most people mean, because I'm trying to write for non-specialists. I just mean: doing new stuff. Technically, the innovation I'm writing about is stuff that's new and interesting. Throwing paint on a wall is maybe new, in the sense that the pattern hasn't been done before, but it's not interesting. And there's one more element, which is whether it's repeatable. You can do something new and interesting, but if it's a one-off, that's not quite it: innovation is about creating new blueprints and new ideas that others can then reuse and perpetuate forward. So I use it as a catch-all for the process of invention, scientific discovery, occasionally artistic creativity, and a little bit of entrepreneurship gets in there too.

There are technical meanings for what is invention versus innovation, what is basic science versus applied science versus technological development, and that's all very useful for communicating with people in the same niche who have a shared understanding of what the language means. But I'm trying to communicate outside the niches, so I use what I perceive to be common language, which I hope is the shared language of all the academics and policymakers and interested amateurs who read my work.

Spanning disciplines

Sarah Kearns 35:56

Yeah, so it sounds like the innovation you're talking about involves looking at bigger trends across different disciplines, and then also writing these living literature reviews so that you don't need to be an expert in economics or engineering to glean something from them. I feel like that's not an easy thing to do, to span disciplines like that.

Matt Clancy 36:46

Yeah, I think it's not easy. One reason I started the project was because I was talking to a famous academic who was writing a book about innovation. And I asked, oh, is it going to be for the general public or for academics? And she said it's going to be for academics, because it's actually really hard to write for the general public in a good way. And I thought, oh, I feel like I can do that.

Maybe it's not as common. I think economics in some ways lends itself to this kind of writing, in that it's about topics that are human-scale. We've been in these situations, or we can imagine ourselves in them; it's not quite as alien as writing about the movements of proteins or something. And a lot of modern economics is based on natural experiments, which are kind of stories that you can tell that make sense.

Incorporating narrative

As one example, there's a paper about the impact of getting a local library on innovation. The way they measure this is they identify this unusual historical episode where Carnegie, the famous philanthropist, built libraries across the United States. For places to get a library, they had to apply, their application would get reviewed, and then they'd get one or not. This paper argues that you can't just compare towns that got libraries to towns that didn't, because requesting a library says something about a town: they're a functioning town interested in improving themselves, they're ambitious, maybe they're also investing in schools, and who knows what else. So the authors find a subset of applicants who applied for libraries and got accepted, but then turned around and rejected the library from Carnegie. At the time, Carnegie had gotten involved in putting down a strike. I don't remember the exact details, but it was very brutal and very polarizing, so some people didn't want anything to do with Carnegie anymore, while others took the library. Now, you do worry that there's maybe a difference between townspeople who were morally outraged and those who weren't. But the paper basically compares the towns that got the library to the towns that would have gotten the library except they changed their minds. The ones that got the library had something like 15% higher patent rates in the decades that followed.

I can tell that story, and it's enjoyable to hear about it like that: as a story. If I just said, "they have a statistical model and they control for all these attributes," that's one way to share it, but I always try not to present papers in an appeal-to-authority way. I want to tell you how they got the data and what they were thinking, so that the reader has a sense of where the authors are coming from and what you would need to believe, or disbelieve, about their results. I think that often gives you a sense of the fragility of the results, which is just as important as the results themselves. With that, you now have a sense of how hard it is to answer this question. At least you can tell people have made a good-faith effort, tried hard, and tried to be thoughtful about it.

Sarah Kearns 40:55

Yeah, I think that's important in telling any story, a scientific one or otherwise: including the nuance and narrative of it.

Matt Clancy 41:10

As an aside, this is the kind of thing that gets dropped out of academic literature reviews, because I think academic literature reviews are trying to be efficient, and their goal is more to be comprehensive and say, "here's what everybody's working on, and if you're interested, go read the paper." I don't necessarily have the mindset that the typical reader of my thing is going to go back and read the paper. Also, I've cut it up into smaller, bite-sized chunks (they're not really bite-sized, but they're not 30-page literature reviews), so I can spend a little more time with five papers rather than having three sentences trying to summarize 50 papers.

Sarah Kearns 41:55

I certainly remember reading a bunch of lit review papers for grad school and being frustrated that I had to go and read another paper. It’s like, “man, I'm reading this one, but now I have to read three to five others, just to understand what’s happening. Can you just explain it better here to me, so I don't need to go and do something else?”

More living literature reviews

Matt Clancy 42:14

Yeah, I mean, sounds like there needs to be a living literature review on whatever topic you're doing in grad school.

Sarah Kearns 42:18

Yeah, microtubule modifications, haha. Pretty niche.

Matt Clancy 42:23

I should say, that's one of the reasons I took the job at Open Philanthropy too: to try and create more living literature reviews. Because we're a grant-making organization, this is the thing I hope to work on: providing more financial support so other people can spend part of their time doing this.

Sarah Kearns 42:40

Yeah. Are there other examples of this that you have seen or helped facilitate?

Matt Clancy 42:46

Not yet, no. I mean, there are definitely academics who write about their work and who do literature reviews. They write Substacks, and they write posts that synthesize stuff, but I don't think there's a good example yet of another person who's got this living literature review angle, where they're going back and updating things.

Sarah Kearns 43:13

It almost makes me want to do this for my former self. But I don't have the same support that it sounds like you do.

The Decrease in Innovations

There's something else I want to go back to regarding innovation.

I've noticed in my sphere that there's an increase of studies and articles describing the decrease in innovation, especially in science. So I'm wondering if you've noticed that. Something I've picked up on intuitively or emotionally (and maybe that's not the right place for science, but) is that the headlines make it feel like it's a bad thing that innovation is decreasing, and that we should be really concerned that innovation is plateauing.1

Matt Clancy 44:17

The nuance I would put on it is that the evidence is pretty clear and consistent that innovation is getting harder, so for a given amount of effort, a discovery with the same level of impact comes along less often. But I actually don't think the evidence is nearly so clear that innovation has slowed or is plummeting. Things can get harder, but if you work harder, you can keep the rate up. And the amount of resources we pour into R&D has gone up a lot.

One example you're probably thinking of: a lot of people have seen this recent paper about the decline of disruption in the scientific literature. I wrote about that paper; there's an article on my living literature review called "Science is getting harder," and it covers this plus like four other lines of evidence that are completely different from this disruption index. The paper describes a way of measuring how disruptive a paper is. They call it disruption, and what it is, is: when people cite your article, do they also cite the stuff that you cite? If they don't, then your article is rated as more disruptive, because your article rendered all the previous work obsolete; nobody cites it anymore. The average level of disruption has gone down a lot over several decades (we don't have data before 1960). But the thing that's kind of surprising is, if you actually just count the number of very disruptive papers, it's not very different from how it was in the 1950s. The number of papers this metric calls very disruptive that come out every year is about the same today as it was then. It's just that, in addition to those, we also have like 100,000 more papers that come out today than came out then, which makes it feel like the hit rate is falling a lot.
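To make that metric concrete, here's a minimal sketch of the general idea behind a disruption-style index in Python. This is a simplified version, not the exact formula from the paper, and the toy data and names are hypothetical:

```python
def disruption_index(focal_refs, citing_refs, n_k=0):
    """Simplified disruption score for one focal paper.

    focal_refs: set of works the focal paper cites.
    citing_refs: list of sets, one per paper that cites the focal paper,
        each holding that paper's own reference list.
    n_k: number of papers that cite the focal paper's references but
        not the focal paper itself.
    """
    # Citers that ignore the focal paper's references: disruptive signal.
    n_i = sum(1 for refs in citing_refs if not (refs & focal_refs))
    # Citers that also cite the focal paper's references: consolidating signal.
    n_j = len(citing_refs) - n_i
    return (n_i - n_j) / (n_i + n_j + n_k)

# Toy example: the focal paper cites A and B; three later papers cite it,
# and only the third also cites one of its references (A).
focal_refs = {"A", "B"}
citers = [{"X"}, {"Y", "Z"}, {"A", "Q"}]
print(disruption_index(focal_refs, citers))  # (2 - 1) / 3 = 0.33...
```

A score near +1 means later papers cite this one while ignoring what it built on (disruptive); near -1 means they cite it alongside its antecedents (consolidating). It also shows mechanically why citing more stuff overall drags scores down, a point that comes up below.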

I don't think it's a good trend. I tend to think there are features of science that make this kind of thing occur naturally. But it does suggest there are ways we could reverse it, like policies we could implement that would make the average piece of science higher impact. I think that would certainly be valuable to do, and that's the kind of stuff we worked on at the Institute for Progress. It's also part of what I work on at Open Philanthropy: what kind of reforms can you make to increase how well science functions?

Changes in citation practices

Sarah Kearns 47:58

I see more micropublications, where people say: here's a data set that you can cite, or here's a figure that I made that you could cite. I wonder about the role of those, because like you said, one data set might not be disruptive, or one piece of data might not totally change how something works. But there are just more little pieces like that.

Matt Clancy 48:29

Yep, that's one explanation: the minimum publishable unit has gotten smaller over time, so you've got a lot more little stuff. We also have a lot more bodies in science. But the big problem a paper like this has to contend with is not so much changes in the size of the publishable unit as changing norms about citation over time.

Over time, people have begun to cite a lot more papers, and they tend to cite older papers than they did in the past. It's a little difficult to disentangle what that means. It could just be because Google makes it so easy to find relevant work. It also messes with a disruption index: if the only thing that changes is that you start citing more stuff, then it's more likely you accidentally land on one of the antecedents of the paper you're citing, and so it will look less disruptive.

Then there's the fact that, over time, people have been disproportionately citing older work at a higher rate. The share of citations to papers from the last five years has fallen pretty steadily for decades now, and that's a little more ambiguous as to why. Google Search makes it easy to find old stuff, maybe older papers are now in the public domain, and peer reviewers are insistent that you cite their work. But it could also be a sign that newer work isn't delivering quite the same level of insight as these older papers.

There's also evidence from text that the level of exploration in science is slowing. The number of unique phrases in a sample of paper titles has plateaued and even begun to decline. It's only when a bunch of different pieces of evidence all point to the same issue that you conclude something is going on.

I don't think we're at the end of science, or there's nothing left to discover, or our system is so broken that we can't even do science. I don't think it's anything that dramatic, but I think there are problems that are worth trying to solve.

Sarah Kearns 51:19

I would say that Thomas Kuhn's work is influential to me, and from that I would say it's fine to be in a paradigm of normal science where things are predictable; maybe it's not such a terrible thing that innovation isn't happening as much. But I certainly do feel that it's not great to have that stagnation, especially when the world's changing like crazy.

The Future of Innovation

So maybe that's a good wrapping-up question, and maybe it's too big to wrap up with, but I'm going to ask it anyway: what is the future of innovation? You mentioned something about policies and social norms being instituted. What would those policies be to increase impact? And impact in its various meanings, not just through citations.

Matt Clancy 52:30

It's a very big question; I'll just pick a couple of nibbles around the edge of it. This whole field of academic research about science is really exploding. Computers have allowed us to work with large datasets, and particularly text-based documents, in a way that was really challenging 20 years ago. The growth of this field is leading to more energy and excitement about trying to use the science of science to reform how science itself is done, which is really looping in on ourselves a bit here. For a simple example, trying to get science funding agencies to use randomized controlled trials on their own internal processes: how they disburse grants, how they pick peer reviewers, all these different things that we've done the same way for a long time. We don't really know if the way we're doing them is good or bad. You can ask questions like: what if we made peer review variance-based, so we prioritize stuff that some people think is really good even if some people think it's really bad? You could do an experiment where you see whether that generates different kinds of research proposals. Does it generate proposals from different communities? Does it help pick different winners? Does that stuff go on to have more impact, measured in different ways? Other people want to try using lotteries to disburse grants and do away with the whole current process. We don't have really good evidence yet about whether those methods would be better or worse, but there is new interest in exploring that stuff.

There was also the whole replication crisis in the social sciences, and things like the Center for Open Science have tried to come up with proposals for reforming science, in the sense of having pre-registration and maybe creating new journals where you can publish null results. All these reform efforts are also trying to be evidence-based, trying different interventions and experimenting with them in different ways. There's a lot of energy on reform within science that uses the study of science to help design better policies.

AI and Machine Learning

The other big thing is the impact of artificial intelligence and machine learning. It's still early days, and it's not 100% clear where it's all going to go, but it has to be mentioned, because it could be that in 20 years we look back and say, that was what really mattered, that's what created a new era. You do have some examples, like AlphaFold in structural biology, where machine learning has changed how things are done in a field. And there are new tools like ChatGPT. I don't know, maybe in 10 years or less, someone will be able to just type into ChatGPT-700 asking it to summarize and synthesize what academia knows about any topic, and it will do a job that's basically as good as I can do. So that's the kind of stuff I see about the future. Hopefully, that's a little taste.

Sarah Kearns 57:23

Yeah, certainly a teaser. My background was in structural biology too. AlphaFold was emerging just as I was finishing my PhD, and I thought, phew, I'm kind of glad not to have to compete with that.

Matt Clancy 57:36

I was talking to a structural biologist, and he was telling me that this came right on top of the cryo-EM revolution in structural biology just five years earlier. I was like, oh man, what a rocky road for that field, ups and downs.

Sarah Kearns 57:50

Certainly exciting. I'm definitely excited to see how this emerging data about publishing leads to more of these types of experiments. Thank you for taking the time to talk with me, Matt. I really, really appreciate it.

Matt Clancy 58:25

Yeah, well, thank you very much for having me on. And I guess, if you want to know what's happening with the future of innovation, just follow my living literature review website, because it will be updated as things change.

And also, PubPub is great. I'm really happy that New Things is on PubPub. I looked around for a platform that would facilitate this updating while archiving older versions, so I didn't lose everything when I made changes, and I found PubPub. Very happy with it.

Sarah Kearns 59:04

Well, woo hoo to free infrastructure!

If you're an academic interested in starting your own living literature review on your own topic, Matt would love to hear from you! His work email is [email protected].
