Social media has drastically restructured the way we communicate in an incredibly short period of time. We can discover, “Like,” click on, and share information faster than ever before, guided by algorithms most of us don’t quite understand.
And while some social scientists, journalists, and activists have been raising concerns about how this is affecting our democracy, mental health, and relationships, we haven’t seen biologists and ecologists weighing in as much.
That’s changed with a new paper published in the prestigious science journal PNAS earlier this month, titled “Stewardship of global collective behavior.”
Seventeen researchers who specialize in widely different fields, from climate science to philosophy, make the case that academics should treat the study of technology’s large-scale impact on society as a “crisis discipline.” A crisis discipline is one in which scientists from different fields work quickly to address an urgent societal problem — the way conservation biology tries to protect endangered species, or climate science research aims to stop global warming.
The paper argues that our lack of understanding about the collective behavioral effects of new technology is a danger to democracy and scientific progress. For example, the paper says that tech companies have “fumbled their way through the ongoing coronavirus pandemic, unable to stem the ‘infodemic’ of misinformation” that has hindered widespread acceptance of masks and vaccines. The authors warn that, left misunderstood and unchecked, new technology could have unintended consequences that contribute to phenomena such as “election tampering, disease, violent extremism, famine, racism, and war.”
It’s a grave warning and call to action by an unusually diverse swath of scholars across disciplines — and their collaboration indicates how concerned they are.
Recode spoke with the lead author of the paper, Joe Bak-Coleman, a postdoctoral fellow at the University of Washington Center for an Informed Public, as well as co-author Carl Bergstrom, a biology professor at the University of Washington, to better understand this call for a paradigm shift in how scientists study the technology we use every day.
The two interviews have been combined and lightly edited for length and clarity.
You tweeted that this paper is one of the most important ones you’ve published yet. Why?
My original background is in infectious disease epidemiology, respiratory viruses. And so I was able to do some stuff that’s reasonably important during Covid. What I’m doing there is really filling in the details in a well-established framework. So it’s more, you know, dotting the i’s and crossing the t’s.
And I think what’s really important about this paper is that it’s not doing that at all. It’s saying, “Here’s a massive problem, and here’s a way to conceptualize it that is critically important for the future.”
And, you know, it’s suggesting an alarm going off upstairs. It’s a call to arms. It’s saying, “Hey, we’ve got to solve this problem, and we don’t have a lot of time.”
And what is that problem? What are you sounding the alarm bell on?
My sense is that social media in particular — as well as a broader range of internet technologies, including algorithmically driven search and click-based advertising — have changed the way that people get information and form opinions about the world.
And [you can] create an information environment where misinformation seems to spread organically. And also [these communities can] be extremely vulnerable to targeted disinformation. We don’t even know the scope of that yet.
The question we were trying to answer was, “What can we infer about the course of society at scale, given what we know about complex systems?”
It’s kind of like how we use mouse models or flies to understand neuroscience. Part of this came back to looking at animal societies — groups — to understand what they tell us about collective behavior in general, but also complex systems more broadly.
So our goal is to take that perspective and then look at human society with that. And one of the things about complex systems is they have a finite limit to perturbation. If you disturb them too much, they change. And they often tend to fail catastrophically, unexpectedly, without warning.
We see this in financial markets — all of a sudden, they crash out of nowhere.
My hope is very much that this [paper] will sort of galvanize people. The issues that are in this paper are ones that people have been thinking about from many, many different fields. It’s not like these are new issues entirely.
It’s rather that I think this paper will hopefully really highlight the magnitude of what’s happened and the urgency of fixing it. Hopefully, it’ll galvanize some kind of transdisciplinary collaborations.
So it’s important because it says this needs to be a crisis discipline, this is something that we don’t understand. We don’t have a theory for how all of these changes are affecting the way that people come to form their beliefs and opinions, and then use those to make decisions. And yet, that’s all changing. It’s happening. …
There’s a misperception that we’re saying, “Exposure to ads is bad — that’s causing the harm.” That’s not what we’re saying. Exposure to ads may or may not be bad. What we’re concerned about is the fact that this information ecosystem has developed to optimize something orthogonal to things that we think are extremely important, like being concerned about the veracity of information or the effect of information on human well-being, on democracy, on health, on the ecosystem.
Those issues are just being left to sort themselves out, without a whole lot of thought or guidance around them.
That puts it in this crisis discipline space. It’s like climate science where you don’t have time to sit down and work out everything definitively. This paper is essentially saying something quite similar — that we don’t have time to wait. We need to start addressing these problems now.
What do you say to the people who think this is not really a crisis and argue that people had similar concerns when the printing press came out that now seem alarmist?
Well, with the printing press, I would push back. The printing press came out and upended history. We’re still recovering from the capacity that the printing press gave to Martin Luther. The printing press radically changed the political landscape in Europe. And, you know, depending on whose histories you go by, you had decades if not centuries of war [after it was introduced].
So, did we somehow recover? Sure we did. Would it have been better to do it in a stewarded way? I don’t know. Maybe. These major transitions in information technology often cause collateral damage. We tend to hope that they also bring about a tremendous amount of good as we advance human knowledge and all of that. But even the fact that you’ve survived doesn’t mean that it’s not worth thinking about how to get through it smoothly.
It reminds me of one of the least intelligent critiques of the [Covid-19] vaccines that we’re using now: “We didn’t have vaccines during the Black Death plague. And we’re still here.” We are, but it took out a third of the population of Europe.
Right, so there is pain and suffering that happened with all those transformational technologies as well.
Yeah. So I think it’s important to recognize that. It’s still possible to mitigate harm as you go through a transformation, even if you know you’re going to be fine. I also don’t think it’s completely obvious that we are going to be fine on the other end.
One of the really key messages of the paper is that there tends to be this general trust that everything will work out, that people will eventually learn to screen sources of information, that the market will take care of it.
And I think one of the things that the paper is saying is that we’ve got no particular reason to think that that’s right. There’s no reason why good information will rise to the top of any ecosystem we’ve designed. So we’re very concerned about that.
One important defense of social media is that Facebook and Twitter can be places where people share new ideas that are not mainstream that end up being right. Sometimes media gatekeepers can get things wrong and social media can allow better information to come out. For example, some people like Zeynep Tufekci were sounding the alarm on the pandemic early, largely on Twitter, back in February 2020, far ahead of the CDC and most journalists.
Yeah, to assess that, you have to look at the net influence of the system, right? Even if somebody on social media has things right, if the net influence of social media is to promote anti-vaccination sentiment in the United States to the point that we’re not going to be able to reach herd immunity, it doesn’t let social media off the hook. …
I was enormously optimistic about the internet in the ’90s. [I thought] this really was going to remove the gatekeepers and allow people who did not have financial, social, and political capital to get their stories out there.
And it’s certainly possible for all that to be true and for the concerns that we express in our paper to also be correct.
Democratizing information has had profound effects, especially for marginalized, underrepresented communities. It gives them the ability to rally online, have a platform, and have a voice. And that is fantastic. At the same time, we have things like genocide of Rohingya Muslims and an insurrection at the Capitol happening as well. And I hope that it’s a false statement to say we have to have those growing pains to have the benefits.
How much do we know about whether [misinformation] has increased in the past year or five years, 10 years, and by how much?
That’s one of the real challenges that we’re facing, actually: we don’t have a lot of information. We need to figure out how, and to what degree, people have been exposed to misinformation, and to what degree that exposure is influencing subsequent online behavior. All of this information is held exclusively by the tech companies that are running these platforms.
[Editor’s note: Most major social media companies work with academics who research their platforms’ effects on society, but the companies restrict and control how much information researchers can use.]
What does treating the impact of social media as a crisis discipline mean?
For me, a crisis discipline is a situation where you don’t have all of the information that you need to know exactly what to do, but you don’t have time to wait to figure it out.
This was the situation with Covid in February or March 2020. We’re definitely in that position with global climate change. We’ve got better models than we did 20 years ago, but we still don’t have a complete description of how that system works. And yet, we certainly don’t have time to wait around and figure all that out.
And here, I think that the speed with which social media, combined with a whole number of other things, has led to very widespread disinformation — [that] here in the United States [is] causing major political upheaval — is striking. How many more elections do you think we have before things get substantially worse?
So there are these super-hard problems that take radical transdisciplinary work. We need to figure out how to come together and talk about all that. But at the same time, we have to be taking actions.
How do you respond to the chicken-and-egg argument? You hear defenders of technology say, “We’re just seeing real-world polarization reflected online,” but there’s no proof that the internet is causing polarization.
This should be a familiar argument. This is what Big Tobacco used, right? This is Merchants of Doubt stuff. They said, “Well, you know, yeah, sure, lung cancer rates are going up, especially among smokers — but there’s no proof it’s been caused by that.”
And now we’re hearing the same thing about misinformation: “Yeah, sure, there’s a lot of misinformation online, but it doesn’t change anyone’s behavior.” But then all of a sudden you got a guy in a loincloth with buffalo horns running around the Capitol building.
The paper calls for people to more urgently understand the impacts of these new rapid advancements in communication technology in the past 15 years. Do you think that this isn’t being addressed enough by academic scientists, government leaders, or companies?
There’s been a lot of work done here already, and I don’t think we’re trying to reinvent that wheel at all. But what we’re really trying to do is highlight the need for urgent action and draw these parallels to climate change and to conservation biology, where they’ve been dealing with really similar problems — and to the way they’ve structured themselves: climate change research now involves everything from chemists to ecologists. Social science, by contrast, tends to be fairly fragmented into subdisciplines, without a lot of connection between them. Trying to bring that together was a major goal of this paper.
I’m biased to be very aware of this problem because my job is to report on social media, but it feels like there is a lot of fear and concern about social media’s impact. Misinformation, phone addiction — these seem to be issues that everyday people worry about. Why do you think there still isn’t enough attention on this?
When I talk to people about social media, yes, there’s a lot of concern, there’s a lot of negativity — and I have a parent’s bias here as well. But the focus is often on the individual-level effects. So it’s, “My kids are developing negative issues around self-esteem because of the way that Instagram is structured to get ‘Likes’ for being perfect and showing more of your body.”
But there’s less talk about the entire large-scale structural changes that this is inducing. So what we’re saying is, we really want people to look at the large-scale structural changes that these technologies are driving in society.