Russian Meddling Goes on in Higher Education
On February 16, the Justice Department issued a detailed indictment of 13 Russians and three Russian companies that had worked since 2014 to subvert the 2016 U.S. elections. According to the indictment, filed by the office of special counsel Robert S. Mueller III, these foreign agents developed a sophisticated network aimed at supporting the Trump campaign, especially in battleground states.
These agents, who worked from an office in St. Petersburg, Russia, stole the identities of American citizens, posed as political activists, and stirred debate on politically sensitive issues such as immigration, religion, and race, all in an attempt to favor the Trump campaign.
Further, the Russians were in contact with “unwitting individuals associated with the Trump campaign,” according to court documents. More broadly, these Russian conspirators wanted “to promote discord in the United States and undermine public confidence in democracy,” said Rod J. Rosenstein, the deputy attorney general overseeing the inquiry.
Although this indictment is not surprising, since for years the 17 U.S. intelligence agencies have said that the Russians have been meddling in our political system, it is an official confirmation that such is the case and not just “fake news.” As intelligence agencies in other countries, particularly in Europe, have also reported, the Russians have carried out similar actions elsewhere to serve their country’s global political ambitions by creating instability that weakens democratic political institutions.
But what may surprise many is that these types of Russian actions are not just targeted at democracy as a political system; they have also gone after our universities.
In an article recently published in the journal Strategic Studies Quarterly, Lt. Col. Jarred Prier of the United States Air Force reveals that Russian Internet bots had another target in the fall of 2015: the University of Missouri.
Internet bots, or simply bots (short for robots), are software programs that run automated, repetitive tasks through what is called web “spidering” or “crawling,” in which a script fetches, analyzes, and files information from web servers at high speed. This allows information to spread very rapidly across social media and Internet sites, giving the impression that many people are engaged in the “conversation” and thus adding to the false belief that the information in question is true.
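To make the crawling loop described above concrete, here is a minimal, hypothetical sketch in Python. It is not any particular bot’s code: it simply extracts the links from a page and visits each unseen link in turn. A real bot would add an HTTP client, rate control, and posting logic; this stub runs against an in-memory set of pages so it needs no network access.

```python
from html.parser import HTMLParser
from collections import deque

class LinkExtractor(HTMLParser):
    """Collect the href targets of all anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, max_pages=10):
    """Breadth-first "spidering": fetch a page, file its links,
    then visit each link not yet seen.  `fetch` is any function
    mapping a URL to its HTML (a stub here; a real bot would use
    an HTTP client)."""
    seen = {start_url}
    queue = deque([start_url])
    visited = []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        visited.append(url)
        parser = LinkExtractor()
        parser.feed(fetch(url))
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited

# Tiny in-memory "web" standing in for real servers.
PAGES = {
    "/home":  '<a href="/news">news</a> <a href="/about">about</a>',
    "/news":  '<a href="/home">home</a>',
    "/about": '',
}

order = crawl("/home", lambda url: PAGES.get(url, ""))
print(order)  # pages in the order the crawler visits them
```

The same loop, pointed at social media APIs and run from many accounts at once, is what lets a handful of operators simulate a crowd.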
As sinister as it sounds, today more than half of all web traffic is generated by bots, many run for purely commercial purposes or by people who pay companies to raise their Internet profiles, so the entire Internet has been exposed to them.
In the case of the University of Missouri (“Mizzou”), much of the racial anger and confusion surrounding incidents on campus in 2015 was manufactured, at least in part, by a disinformation campaign from Russia. According to Prier’s article, the bots created false impressions about threats against black students and faculty members at the university, which resulted in some campus leaders calling for people to stay home and many students saying they were terrified. The false reports also contributed to a negative image of the university regarding its support for minority students, an image the university is trying to counter to this day.
What made the Russian meddling so slick in this case was that the Russian bots avoided detection, in part, because the hashtag #PrayforMizzou was used by real people who were at the university or were concerned about it, as well as by those forwarding the bot-created tweets.
According to the journal article, one of the false news items that was quickly picked up and circulated came from Russian bots. The tweets said that local police officers were marching with Ku Klux Klan members on or near campus, thus representing a serious threat to black students and faculty. One tweet claimed that the supposed author’s younger brother had been beaten up by police officers and was accompanied by a photo of a black child who looked as if he had just been beaten. The picture was actually from an incident that had occurred a year earlier in Ohio. People immediately started retweeting the story, using the #PrayforMizzou hashtag.
If, at the time of the incidents, you had done a Google search for “bruised black child,” one of the first images to pop up would have been the Ohio picture, along with the false story about Mizzou. Needless to say, this false information inflamed campus racial tensions even further. Black students, supported by the football team, protested, and the University of Missouri System’s president – the target of some of the protests – resigned.
Since Mizzou is not alone among the campuses with racial incidents that extremist right-wing groups have targeted, we can expect more of these cyberattacks in the years to come.
You may ask: since bots are a human technology, why can’t we counter them with technology? It is not that easy. “You can design a strategy of dispersing bots that evade the algorithms Twitter designed to catch bot tweeting, mainly because the Mizzou hashtag was being used outside of that attack. The narrative was set as the trend was hijacked, and the hoax was underway,” wrote Prier in his article.
The question now is, how can U.S. colleges and universities prepare so they are not targeted by ill-intended bots? First, these institutions need to be proactive in their information strategies, not just reactive, because by the time a misinformation crisis occurs, it may be too late to respond. Second, the top officials at these institutions need to come forward quickly, openly, and with authenticity to present the facts.
We should never underestimate the public appetite for bad, shocking news, as well as how vulnerable humans are to false news. After all, it is far easier to believe than to know.