Disappointed, but not Surprised
Twitter has failed to provide researchers with the tools to stop disinformation, while simultaneously profiting off of it
Pam is 42. She is a white, stay-at-home mom who lives in Bloomington, IL. Pam has two kids, a nice house, a yellow lab named Jackson, and a Twitter account. She seems like a nice lady. Her dad was in Vietnam and she is a proud American who has voted in every federal and local election since she turned eighteen. Lately though, Pam has become concerned about her country. She is concerned about China, the economy, and immigration. Oh and she is concerned about the people in her community. She is concerned that they don’t seem to care about all the things on the news. About what the Democrats are getting up to in basements… You agree with most of her concerns. After all, it’s good to ask questions. You think Pam seems like a nice lady.
But here is the thing: Pam doesn’t vote. There are no kids. No house. No lab named Jackson. Pam doesn’t exist — and in that way, “Pam” is not the only one. In fact, there are many accounts just like Pam. Slowly but surely, adding a retweet here and a reply there that looks legit and is easy to share but, upon closer inspection, is designed to make you afraid, distrustful, and angry.
Pam is misinformation dressed in the trappings of humanity. An inauthentic account built to sow distrust in people, institutions, and the very idea of democracy. Twitter could put a stop to all of the Pams. Twitter could share information on more of the accounts they suspend and let us help stop the millions of Pams across the web. But they haven’t and they won’t.
Twitter Can Solve This, but They Won’t
We all know that in real life, we don’t communicate like we do on Twitter. In real life, we get our news and opinions from friends, family, and coworkers — people with a wide array of beliefs and ideals. It is messy, often inaccurate, and, importantly, uncurated. In real life, unlike on Twitter, we are not subject to the underlying system that allows Twitter to exist. A system that tracks everything you see and enjoy and sorts you into neat buckets that allow advertisers to show you exactly what you want, exactly when you want it. It’s a system that feeds on your data, suggesting more content that it thinks you will like — because the longer the system has your attention, the more ads you see, and the more money the system makes. That’s why, when you like a post about a new recipe, you start seeing pictures of baked goods. Recipes and baking are in the same bucket. It is also why, if you are interested in UFOs, you might start seeing posts about flat earth, or Bigfoot, or QAnon.
Following the January 6 insurrection at the United States Capitol, Twitter banned over 70,000 QAnon accounts in an attempt to combat the misinformation being spread by accounts like Pam’s. That was a follow-up to the 7,000 accounts that Twitter banned over the summer for related reasons.
With regard to those 70,000 accounts, the big question is: what took them so long?
Using Twitter’s API in December 2020, Social Forensics set out to find data on QAnon-related accounts. Using fairly simple parameters (i.e., the presence of “QAnon” or the QAnon motto, “WWG1WGA,” in account display names and bio descriptions), we found 45,000 QAnon-related accounts. After the latest wave of bans, 43,000 of the accounts we flagged (95%+) had been suspended.
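The screen described above can be sketched as a simple keyword match over account metadata. This is a minimal illustration only; the field names (`name`, `description`) and the helper function are hypothetical, not Social Forensics’ actual pipeline or a real Twitter API client:

```python
# Hypothetical sketch of a QAnon-keyword screen over account records.
# The record fields and helper name are illustrative assumptions,
# not Social Forensics' code or the Twitter API.

KEYWORDS = ("qanon", "wwg1wga")  # motto noted in the article

def looks_qanon_related(account: dict) -> bool:
    """Flag an account if 'QAnon' or 'WWG1WGA' appears in its
    display name or bio description (case-insensitive)."""
    text = f"{account.get('name', '')} {account.get('description', '')}".lower()
    return any(keyword in text for keyword in KEYWORDS)

# Example: screen a small batch of (made-up) account records
accounts = [
    {"name": "Pam", "description": "Proud mom. WWG1WGA!"},
    {"name": "QAnon Watch", "description": "Tracking the movement."},
    {"name": "Jackson", "description": "Yellow lab enthusiast."},
]
flagged = [a for a in accounts if looks_qanon_related(a)]
print(len(flagged))  # prints 2
```

In practice a screen like this would run over account objects returned by the API rather than hand-built dicts, and simple substring matching will produce false positives (e.g., accounts that track or debunk the movement), which is why manual review and richer signals matter.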
Of particular concern is that 92% of those accounts were created prior to the smaller wave of suspensions in July 2020. If these accounts had been spreading misinformation for months, even years, in clear violation of Twitter’s Terms of Service, why were they still on the platform? Why weren’t they caught and suspended in the summer of 2020, when Twitter banned 7,000 accounts?
Would these accounts have continued to exist if their latest rash of conspiracies had not played such a large and public role in the 2020 election and the insurrection at the Capitol?
Ultimately, it comes down to an unfortunate truth about Twitter and many other social media companies: platforms that make money on advertising profit from inauthentic accounts. As many have said, if you aren’t paying for something, you are the product, and this could not be more true than in Twitter’s case. The more inauthentic accounts on Twitter, the more users Twitter can claim to advertise to, and the better their metrics look. This wouldn’t be an issue if the inauthentic accounts were not spreading false information. It also would not be a problem if Twitter were not taken as seriously as it is. For millions of people, Twitter serves as both newsstand and coffee shop — a place where you read headlines and then discuss what people in your circle think about those headlines. The problem is compounded when traditional media uses Twitter as a primary source. In people’s minds, that validates Twitter as a place where you can find reliable information when, in reality, the vast majority of real people on Twitter are not reliable sources, and many accounts exist solely to spread and amplify misinformation.
Allowing misinformation and inauthentic accounts has real consequences beyond padding Twitter’s bottom line. Those consequences make it the company’s responsibility to moderate the content shared on their site. It is their responsibility to deplatform the “Pams.” Twitter does actively moderate their site — they remove copyrighted material, for instance — so they clearly have the ability to remove unwanted content. They could set up systems that closely monitor and flag or delete inaccurate information (whether accidental or purposeful), but they won’t, because doing so would create a cost center that, from a shareholder’s perspective, would exist solely to lower revenue.
Researchers Could Help, but Twitter Won’t Let Us
There are other options beyond trusting Twitter to be the arbiter of truth and investigator of malfeasance. Those of us who work to understand, research, and fight misinformation could help, but in the case of the 70,000 deleted accounts, Twitter is actively inhibiting our ability to do so.
In the past, Twitter has shared information on deleted misinformation accounts. However, Twitter only releases data on accounts that they believe to be tied to “state-sponsored actors,” that is, accounts tied to a state-funded disinformation campaign. All well and good, except they did not release the QAnon account data. Does Twitter believe that none of these accounts were connected to state-sponsored actors? It does not seem out of the question that foreign entities played a role in creating and propagating some of them. And regardless of whether this activity was fully or partially state-sponsored, or merely state-influenced, a movement of this magnitude and potential for violence should be researched. Twitter’s “state-sponsorship” rule fails to account for the fact that it isn’t just the Russian or Chinese governments who have a vested interest in misinformation, and that conspiracies have a sad tendency to go viral. If we only have access to data from state-sponsored accounts, we cannot combat misinformation that has the potential to infiltrate the gullible among us.
Of course, we are not implying that Twitter should have left those accounts up. Deleting them was a step in the right direction. However, like so many of the steps Twitter has taken, it fails to address the root of the problem. To truly combat misinformation, Twitter needs to release the archived account data so that researchers outside of their platform can use it to develop more sophisticated approaches to combating the sources of misinformation. Twitter needs to stop treating the symptoms and start curing the disease.
To Wrap Up
Banning 70,000 accounts was not the end of misinformation on Twitter. It won’t even be the end of QAnon on Twitter. In order for us to make a significant impact on the spread of these hateful and dangerous conspiracies, we need access to the data on the deleted accounts. We need to understand the methods misinformation accounts use. Whether those accounts are sponsored by a state, backed by an organization with a vested interest in distributing conspiracies, or run by lone trolls, denying researchers the ability to study their methods is a failure to protect users from misinformation.
This is just the latest entry in Twitter’s long history of failing to combat this problem. If they do not make deleted account data public, they are knowingly allowing the actors behind these accounts to recycle the tactics that have led to the information environment we currently find ourselves in. By failing to release this data, Twitter is tacitly acknowledging that our concerns about the role social media plays in our public discord are correct. Twitter doesn’t care about the truth or about protecting users from dangerous conspiracies. Twitter only cares about growth and profit, which is disappointing, but not surprising. The real question for all of us is how much longer we can afford to tolerate and wait. Because whether it’s conspiracy-driven violence, racial inequity, or climate catastrophe, we are all on the clock.
Social Forensics maps and monitors social connections and activity.
We create purposefully designed tools to manage social data analytics needs across various industries. Our focus is audience segmentation and identifying coordinated inauthentic behavior (CIB) across social media platforms.
Geoff Golberg is an NYC-based researcher (and entrepreneur) who is fascinated by graph visualization/network analysis — more specifically, when applied to social networks and blockchain activity. His experience spans structured finance, ad tech, and digital marketing/customer acquisition, both at startups and public companies.
Geoff is the Founder/CEO/Janitor of Social Forensics.