Troll Watch: How Tech Is Cracking Down On Election Disinformation

MICHEL MARTIN, HOST:

On the same day this week that YouTube said it would remove more, quote, "conspiracy theory content used to justify real-world violence," unquote, President Trump refused to disavow one of the groups pushing that material online. Here's the president speaking during the NBC town hall moderated by Savannah Guthrie.

(SOUNDBITE OF ARCHIVED RECORDING)

SAVANNAH GUTHRIE: Now, can you just once and for all state that that is completely not true...

PRESIDENT DONALD TRUMP: So I know - yeah.

GUTHRIE: ...And disavow QAnon in its entirety?

TRUMP: I know nothing about QAnon.

GUTHRIE: I just told you.

TRUMP: I know very little. You told me. But what you tell me doesn't necessarily make it fact. I hate to say that. I know nothing about it. I do know they are very much against pedophilia.

MARTIN: YouTube's move to limit the spread of QAnon conspiracy theories is just one of the efforts by tech companies to crack down ahead of Election Day. Facebook, for instance, rolled out a ban on messages that deny the Holocaust and ads that discourage vaccinations. So are these policies enough? And what kind of disinformation is being pushed ahead of Election Day?

We've called Camille Francois for this. She is a former executive at Google. And she is the chief innovation officer at Graphika, which analyzes data and networks. And she's with us now from New York. Camille Francois, thank you so much for joining us.

CAMILLE FRANCOIS: Thank you for having me.

MARTIN: Would you just start by telling us about some of the threads of misinformation currently being pushed online? What's your organization been tracking?

FRANCOIS: We tend to put a lot of different concepts under the broad umbrella of misinformation and disinformation. And in reality, a lot of these pieces don't belong together. So when your uncle shares some bad information on Facebook, that really has nothing to do with, for instance, the sophisticated Russian troll farm campaign that seeks to attack the integrity of the election. We continue to see a lot of foreign threats. Russia continues to be active in the space, along with China and Iran. And, of course, there are a lot of domestic threats. We've talked about conspiracy theories. QAnon is a big one. There are also concerns with militarized social movements, movements like the boogaloo, for instance. And, finally, there's a wealth of bad and unreliable information that people can share without any bad intent. And this is what we call misinformation.

MARTIN: So let's take those separately. I mean, I take your point. I know that a lot of news organizations and different nonprofit groups and authors have been trying to address what I would call the benign misinformation that you just described, you know? Your uncle passing things along just because that's what he's interested in, not realizing that it's not true. There have been a lot of groups trying to address that in recent years, trying to get people to practice good information hygiene, as it were. Are those efforts making any difference at all from what you can see?

FRANCOIS: I think those efforts are helping. Labeling the source of the information helps, and so does helping people verify where the information they're sharing is coming from. I'll give you specific examples. Recently, we've seen a lot of viral photos and viral videos that were pretending to be from, you know, New York last week or Chicago 10 minutes ago. The people sharing them believed that to be the case. But those videos were quickly proven to be totally decontextualized. One of them was from Paris in 2016. Another one was from Chicago, I think, but years ago. And so helping people understand where the information is coming from also helps them not share things that are decontextualized.

MARTIN: So what about the other end of the spectrum, what you talked about - these organized troll farm efforts, mainly by hostile foreign governments? How does what you're seeing now compare to 2016?

FRANCOIS: That's a great question. Let me be straightforward. In 2016, neither Silicon Valley nor Washington was prepared to tackle foreign interference. The large Russian efforts to manipulate public conversations were quite successful. And if you think about it, none of the major platforms had any rules to prevent this type of activity, and they had no teams focused on detecting it. We've come a long way since then. In 2020, all major platforms have created very specific rules against this type of hostile troll farm activity. So we're in a much better place, because there's now a professional field of people detecting this type of activity, and everybody has agreed that it's unwanted and has created rules around it.

MARTIN: Well, as we mentioned, Facebook announced just last week that it would be banning conspiracy-based movements, or ideas being pushed by conspiracy-based movements, like QAnon. And YouTube announced similar plans to address that group and others like it just a couple of days ago. I take your point. But it's less than three weeks until Election Day. Does it strike you as a little bit late?

FRANCOIS: I think you're right that the platforms have been much slower to address domestic disinformation, and to address some of the design considerations that help disinformation go viral, than they have with foreign actors. Facebook, for instance, took action on QAnon for the first time under the idea that the conspiracy theory was actually violence-generating. And it did that in two waves. In the first wave of action, they said, let's remove all the pages that are related to QAnon and have violent content. And in the second wave, they said, actually, let's just remove all pages, groups and Instagram accounts that represent QAnon, even if they contain no violent content, right? So they came around to the idea that the very idea of the QAnon conspiracy theory was not wanted on Facebook and was a violence-inducing conspiracy theory.

We also saw Twitter do something quite interesting. They created a temporary election season set of changes. They're making it harder for people to retweet quickly, and they're turning off some of the suggestions just for the election season, recognizing that some of these features can accelerate the spread of harmful mis- and disinformation throughout the election.

MARTIN: Do you feel, on the whole, that this country is safer from the kinds of distorting effects we saw in 2016, especially the intentionally destructive effects, than we were four years ago?

FRANCOIS: Yeah. I think that the technology industry is in a better place to make us safer. But I think that some of the threats are not technology threats. And when you see elected officials actively participating in disseminating harmful and false information, then you don't really have a Twitter or Facebook problem; you have a political problem. So I think that, overall, a lot has been done by the technology industry and by the platforms to get better at tackling mis- and disinformation. There's still more that needs to be done. But, unfortunately, not all of this problem is a technology problem. We also have a really important political problem around the spreading of disinformation.

MARTIN: That was Camille Francois. She is a data researcher and the chief innovation officer at Graphika. Ms. Francois, thank you so much for joining us today and sharing your expertise.

FRANCOIS: Thank you for having me.

Transcript provided by NPR, Copyright NPR.