BEcurious

BEworks Conversations with David Rand: Behavioral Science, Theory and Application

Written by Kelly Peters | Jul 6, 2020

David G. Rand is the Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences at MIT Sloan, and the Director of the Human Cooperation Laboratory and the Applied Cooperation Team. 

David’s work in behavioral economics and psychology uses a cognitive science perspective grounded in the tension between more intuitive versus deliberative modes of decision-making, and explores topics such as cooperation/prosociality, punishment/condemnation, misinformation, political preferences, and the dynamics of social media platform behavior. His research has been published in leading academic journals, such as Nature and Science, and he has authored pieces for popular outlets, such as the New York Times and Wired. His work has garnered widespread attention in international media, and David was named in Wired magazine’s The Smart List 2012: “50 People who will change the world,” among many other awards and honors for his research.

In this conversation, Kelly and David discuss the importance of bridging the gap between theory and application in the behavioral sciences, misinformation and polarization in the context of a pandemic, and much more. 


David: [David describes the beginnings of his scientific career in systems/computational biology, and the evolution of his work on cooperation, which fused economics with moral psychology theories and paradigms to build a body of evidence that people’s intuition is typically to cooperate rather than to act selfishly.] He concludes: ‘I think that there are really huge opportunities for productive new ideas from bridging fields. Taking tools from one field and tools from another field and putting them together is really the essence of a lot of what I’ve done.’

Kelly: The other vector in this is applying research that’s happening in academic settings to contemporary, real-world challenges, and that’s yet another bridge. The process of science has no immediate obligation to "serve the world today"; it’s to build knowledge. I’m not implying it is knowledge for knowledge’s sake, but building knowledge is a lot of work. Making that next bridge to solving real-world problems is a whole other set of tasks that need to be completed, and yet you’re also very interested in that bridge between work in a theoretical lab space and applying it to real-world challenges.

David: Yeah, totally. That's something that's very much evolved over my career.   

At the beginning of grad school, I very explicitly said, ‘I just want to do things that I find interesting, and I don’t really care about the application, the utility of it…’ Then, I was feeling more and more, ‘Well, fine, it’s fun to do this random stuff, but if I’m putting all this time into it, it really should be something that matters.’

I think also what was nice about both the prisoner's dilemma stuff [paradigm used in David’s cooperation research], and then the misinformation work that we've been doing more recently, is that relative to lots of other things you could be studying, the gap between the theory and the application is actually quite small, so it seems natural to try and bridge that gap.   
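[Editor’s note: for readers unfamiliar with the paradigm, here is a minimal sketch of a one-shot prisoner’s dilemma in code. This is our own illustration; the dollar payoffs are hypothetical and not drawn from any specific study.]

```python
# One-shot prisoner's dilemma with small monetary stakes (hypothetical
# payoffs, illustrating the paradigm mentioned above). Each player chooses
# to Cooperate ("C") or Defect ("D").

PAYOFFS = {  # (my move, their move) -> my payoff in dollars
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def payoff(me, them):
    return PAYOFFS[(me, them)]

# Whatever the other player does, defecting pays more for me...
assert payoff("D", "C") > payoff("C", "C")
assert payoff("D", "D") > payoff("C", "D")
# ...and yet both players earn more under mutual cooperation:
assert payoff("C", "C") > payoff("D", "D")
```

The tension the asserts encode, that selfishness is individually optimal while cooperation is collectively optimal, is what makes the game a useful laboratory measure of cooperative intuitions.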

I think also what I came to appreciate is that it is really scientifically interesting and challenging to do that translation. It’s not like, ‘Well, we showed something works when people play a prisoner’s dilemma for $5, so, alright, all done, we understand that one.’ That’s just the tip of the iceberg.

If you really understand the psychology of what’s going on, you should be able to implement it. Conversely, if you can’t successfully implement it in the real world, that means there are important aspects of the psychology that you didn’t understand.

There’s a sense amongst those doing basic science that you do the novel, interesting research in the basic science and then you do the boring, obvious translation: ‘well, we just apply this thing we already know in some real-world setting.’ But it’s totally not like that. The only person who would say that is someone who has never tried to do the translational application work, because it’s not that easy. You really have to understand the psychology in order to figure out how to successfully apply it.

Kelly: That’s fantastic. Obviously, I’m a huge believer in the same thing you are: a fascinating aspect of applying knowledge to real-world challenges is that it also strengthens the quality of the science. You can’t just answer one little question and say, ‘Okay, there it is. We’ve contributed.’ The science itself advances so much through that process of application.

I think we face some big, huge challenges the other way, where science, scientific institutions, and even the word ‘academic’ have become pejoratives. In the business world, being called an academic is not a neutral statement about one’s profession. It’s actually derogatory, because what it means is that one is theoretical, out of touch, even misinformed.

David: I think the fact that most of the academy doesn’t value translational or applied work is the flip side of the fact that the people doing the applied work don’t really take theoretical work that seriously. I don’t know, I could be wrong, but my guess is that if academics took the translational part more seriously and cared about it, then there wouldn’t be as much poo-pooing of academia in the other direction.

I think that, essentially, a lot of that is actually fair. When people in industry are critical of ‘that’s academic,’ it’s not unreasonable, because a really large majority of what is happening in the academy is extremely basic science research, and it’s not at all clear whether it translates or not. People, as we were saying, aren’t really interested in that question of whether it translates.

So, if I were a person running an organization and trying to make decisions, I would want to know more than ‘this thing happened when I had somebody play a prisoner’s dilemma for $1’ before saying, ‘Okay, let me make business decisions based on that.’

So, I think that it's a two-way street in the sense that it is actually theoretically important for academics to go out and understand things in the real world.  Also, if academics want to have an impact in the world, or want to have the things that they're learning be useful, then it's also critical that you go out and do things in the real world because people in the real world shouldn't take it seriously if you don't. 

Kelly: We've got a very complex set of behavioral challenges in front of us right now around the public health challenges with COVID-19. One of them, for instance, is the use of masks. I would love to hear your thinking there. What are the things, particularly, that might help us to understand people's reactance to it? Also, what should public health leaders try to do to help people who have maybe different views of the implications of mask wearing?  

David: There's a lot of different things going on in the COVID-19 context. I think there's two different parts of it that we've been thinking a lot about. One is misinformation.   

In these public health challenges, it’s particularly important for people to have accurate beliefs and an accurate understanding of things, because if they don’t believe the right thing, that can obviously lead them to make decisions that are not good for society. So, we’ve been doing a lot of research on COVID-19 misinformation on social media. This is a natural extension of work we’ve been doing for the last three or four years on political misinformation, and why people believe it and share it.

The basic thing that we found over a whole lot of studies is that, in general, people want to have accurate beliefs and want to share content that is accurate, or, let’s say, they do not want to share content that’s inaccurate. It’s not a post-truth world where nobody cares about accuracy anymore and people are perfectly happy sharing all kinds of things they know are false because it advances whatever agenda they have. There certainly are some political operatives doing that, but the average person is essentially doing the best they can with what they have in front of them.

The problem is that, in general, and on social media in particular, people tend to be distracted from thinking about accuracy. When people stop and think carefully about accuracy, a lot of the time they can actually do pretty well. The problem is that people don’t do that, or, let’s say, rarely do that. I think the social media context, in particular, biases people against thinking carefully, because you’re scrolling quickly through a million things. You’ve got news-related content mixed in with pictures of cats, and babies, and all kinds of things where thinking rationally or carefully doesn’t even make sense.

Also, a lot of times people get on social media because they want to relax and unwind, not think carefully. I think that predisposes people to, essentially, forget to ask ‘is this accurate or not?’ before they decide to share something. What we’ve been doing is providing evidence for that account of why people share misinformation, and also evidence that it’s actually the dominant reason: in our experiments, it explains a lot more of the sharing of misinformation than purposeful sharing or confusion does. The implication is that if you just nudge people to think about the concept of accuracy, it makes them more discerning in their subsequent sharing. If they think about it, they can typically tell what’s true, and they don’t want to share stuff that’s not true. The problem is forgetting to think about it. So, you get them to think about it a little bit, and they’re more discerning in their subsequent sharing.

We show this in survey experiments. We also did a big field experiment on Twitter where we messaged thousands of users, asking them to rate the accuracy of a single random headline, not anything that they'd been shown before, just some random thing. The idea is that reading one headline for accuracy makes the concept of accuracy top of mind. Then when you go back to your feed, you're more likely to be thinking about accuracy. We found a significant improvement in the average quality of the news that they shared after receiving this nudge.   
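[Editor’s note: to make the logic of such an experiment concrete, here is a hypothetical sketch in Python of a difference-in-differences comparison of sharing quality. The data, numbers, and function names are invented for illustration; the study’s actual design and analysis are more involved.]

```python
# Hypothetical sketch: compare the change in mean quality of links shared
# before vs. after the accuracy nudge, relative to users who were never
# messaged. Quality scores (0-1) stand in for fact-checker domain ratings.

from statistics import mean

def quality_shift(shares_before, shares_after):
    """Change in a user group's mean quality rating of shared links."""
    return mean(shares_after) - mean(shares_before)

# Invented per-group quality ratings:
nudged_before, nudged_after = [0.42, 0.38, 0.51], [0.55, 0.49, 0.60]
control_before, control_after = [0.44, 0.40, 0.48], [0.45, 0.39, 0.50]

effect = (quality_shift(nudged_before, nudged_after)
          - quality_shift(control_before, control_after))
print(f"Difference-in-differences estimate: {effect:+.2f}")  # +0.10
```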

This is something that we're talking to various different social media platforms about, Come on guys, you should be so happy about this. It's a way to improve quality that doesn't rely on a centralized authority saying this is true, this is false, and so on, as it gets people to take on a little bit of that burden themselves.  

I think in the COVID context, it’s particularly hard because it’s scary. When people are in that emotional mindset, really anxious, they are again less likely to stop and think carefully. So one of the challenges there is just getting people to slow down and think carefully. Another major challenge is the source of the misinformation. One source is weird random websites: conspiracy theory sites, or supplement manufacturers pushing bogus stuff. It’s a very different thing when you have the President of the United States, or top leaders of your party, directly stating misinformation. We’ve found that really undermines people’s ability to tell what’s true or not. If you have elites of your party espousing misinformation, it makes people much more likely to believe it, even when they stop and think about it. That is not really something that we can directly act on.

I think a huge problem with COVID is that the government, in the US in particular, has done a really bad job: providing misinformation and conflicting information, and not having a consistent story. Obviously, as the science changes, you want the narrative to evolve in response to the science, but that’s not what’s been going on. That’s also huge in the face mask context. There were some serious problems with the messaging.

The extent to which you thought COVID was dangerous to the community was a much stronger predictor of your prevention intentions than the extent to which you thought COVID was a danger to yourself. Accordingly, messaging that emphasizes ‘don’t get other people sick’ was as good as or better than messaging that says ‘don’t get yourself sick,’ which is surprising if you’re not thinking about this from a social perspective. If you’re thinking from a standard self-interest perspective, ‘well, don’t get sick’ should really be the most motivating thing, but it’s quite clearly not.

Kelly: Polarization and misinformation are incredibly important areas of research to approach systematically, given that we seem to be doing a very poor job of having healthy debates with one another. If we don’t understand the role of bias and logical fallacies, then how can we have a good and reasoned debate? One of the challenges is that launching into ad hominem attacks doesn’t help advance anything. Taking a calm approach of saying, ‘How does that work? Maybe I’m missing something; can you explain how that works?’ would allow much more civilized discourse. Maybe there are elements of truth in another perspective that neither side would have seen because the debate stayed at that polarized level, as opposed to finding where some of the underlying intention is actually aligned, or just helping the person realize, ‘Oh, wait, here’s where I’m wrong. I don’t actually understand how this particular mechanism works. I’m making assumptions, or somehow I can update my beliefs.’ But everybody gets to keep the label of being a critical thinker, and of wanting to help others and share information. We’re all educators, and teachers, and noble in our intent, but now we can work our way through improving the quality of our discussions.

David: Totally. An important element in that vein is that, over the last few years, I’ve started really trying to understand polarization, and how it is that people can have dramatically different beliefs, not just about what to do, but about basic facts. I think a lot of the standard narrative is around cognitive biases, and motivated reasoning, and things like that: people tricking themselves so that they can protect their identities, and so on. I am much less convinced of that than I was at the beginning. I really think that, in general, people are trying to do good, accurate reasoning, trying to have accurate beliefs.

The issue is that they're starting with different information inputs. If you have one person that only watches Fox, and you have another person that only watches MSNBC, and then they take that data that they’re receiving as an input, then in a completely rational, not at all biased way, form beliefs and update those beliefs, they're going to come up with completely different understandings of the world. That can also help you understand when you are having a debate with someone from the other side. If you understand it, not as that person is stupid because they believe that crazy thing, or that person is willfully ignorant that they could believe such a thing, but instead, imagine if I was getting the input stream that they were getting, I would see the world very differently.  

Years ago, I signed up for the Google News alert emails from the New York Times and Fox, and it has been extremely informative to see the difference in both what they’re covering and how they cover the things they both cover. If you were only reading one of these and not the other, you would come out in a totally different place, even if you were absolutely doing your best, with no bias, and so on. And that is not something you can lay at the feet of social media. There’s a lot of consolidation in the media market, both on TV and talk radio, where people only listen to or watch the stuff from their side, and that causes real issues.