Social networks have struggled to figure out how to handle issues like threats of violence and the presence of hate groups on their platforms. But a new study suggests that attempts to limit the latter run up against a serious problem: the networks formed by hate group members are remarkably resilient, and they will migrate from network to network, keeping and sometimes expanding their connections in the process. The study does offer a few suggestions for how to limit the impact of these groups, but many of the suggestions will require the intervention of actual humans, rather than the algorithms most social networks favor.
Finding the “hate highways”
The work, done by researchers at George Washington University and the University of Miami, focused on networks of racist groups, centered on the US-based KKK. To do this, the researchers tracked the presence of racist groups on two major social networks: Facebook and a Russia-based network called VKontakte. They crafted an automated system that could identify interest groups that shared links with each other. It charted these connections iteratively, continuing until the process simply re-identified previously known groups. The system tracked links out to other social sites like Instagram, but it didn't iterate within those sites.
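The crawl described here is essentially a breadth-first expansion over clusters that stops once it only re-encounters known groups. A minimal sketch of that idea, where `get_linked_clusters` is a hypothetical stand-in for whatever platform lookup returns the clusters a given cluster links to:

```python
from collections import deque

def discover_clusters(seed_clusters, get_linked_clusters):
    """Iteratively expand a set of clusters by following the links they
    share, stopping when the process only re-identifies known clusters.

    `get_linked_clusters` is a placeholder for a platform-specific
    lookup; it is not part of the study's published code.
    """
    known = set(seed_clusters)
    frontier = deque(seed_clusters)
    while frontier:
        cluster = frontier.popleft()
        for linked in get_linked_clusters(cluster):
            if linked not in known:   # only previously unseen clusters
                known.add(linked)     # extend the frontier
                frontier.append(linked)
    return known
```

A toy link map shows the stopping behavior: the crawl follows chains of links but terminates once every reachable cluster has been seen, and never visits clusters nothing links to.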
The authors confirmed that this approach worked by performing a similar analysis manually. Satisfied, the team then tracked daily changes over an extended period of 2018. Through this, they identified more than 768 nodes formed by members of the white supremacy movement. Other nodes were identified as well, but these tended to involve things like pornography or illicit materials, so they were ignored for this study.
The groups identified this way varied considerably in size, with the cluster sizes in some networks following a power-law distribution. The authors say this indicates the clusters were self-organizing, since it would be difficult to engineer such a pattern deliberately.
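Power-law claims like this are usually checked with a maximum-likelihood fit of the exponent. As an illustration only (this is not the authors' analysis, and the synthetic sizes below are a stand-in for the study's data), the standard continuous Hill estimator recovers a known exponent from sizes sampled from a power law:

```python
import math
import random

def powerlaw_alpha(sizes, x_min=1.0):
    """Maximum-likelihood (Hill) estimate of the exponent alpha for a
    continuous power law p(x) ~ x**-alpha over values >= x_min."""
    xs = [s for s in sizes if s >= x_min]
    return 1 + len(xs) / sum(math.log(x / x_min) for x in xs)

# Synthetic "cluster sizes" drawn by inverse-CDF sampling from a power
# law with alpha = 2.5 -- illustrative, not the study's data.
rng = random.Random(0)
sizes = [(1 - rng.random()) ** (-1 / 1.5) for _ in range(10_000)]
print(powerlaw_alpha(sizes))  # close to the true exponent, 2.5
```

With 10,000 samples the estimate lands very close to the true exponent; on real data one would also need to fit the cutoff `x_min` and test goodness of fit before claiming a power law.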
The networks were also geographically diverse. While VKontakte is primarily used in Russia and Eastern Europe, US-based white supremacists ended up using the service as well, and there were multiple cross-platform links and subgroups with presences on both networks. Networks on Facebook extend from the US into Western Europe, but they also have outposts in South Africa and the Philippines. This led to some bizarre cross-cultural links: “neo-Nazi clusters with membership drawn from the United Kingdom, Canada, United States, Australia, and New Zealand feature material about English football, Brexit, and skinhead imagery while also promoting black music genres.”
The period that the authors tracked included some major events that altered the white supremacist networks. Most prominent among these was Facebook's ban of the KKK. That led to a wholesale migration of US-based KKK groups to VKontakte; in many cases, these were simply mirrors of the pages the groups had set up on Facebook. But things became complicated on VKontakte as well, when Ukraine banned the entire network in that country.
At that point, some of the original Facebook groups surreptitiously made their way back to Facebook, but they did so with some new skills. Hoping to evade Facebook's algorithms, the re-formed KKK groups often hid their identity by using Cyrillic characters.
Another notable event that reshaped the networks was the Parkland school shooting, after which it was discovered that the shooter had an interest in the KKK and its symbols. In the wake of the shooting, many of the small clusters of KKK supporters started forming links with larger, more established hate groups. “This adaptive evolutionary response helps the decentralized KKK ideological organism to protect itself by bringing together previously unconnected supporters,” the authors argue. They also note that a similar growth in clustering took place among ISIS supporters in the wake of the news that its leader had been injured in combat.
A possible plan?
Given the behavior seen here and in the previous study of ISIS groups, the authors built a model of the formation of connections among hate groups. They used it to try out a few different policies to see how each might weaken the robust networks that form online. The result is a series of suggestions for any platform that decides to get serious about tackling hate groups that use its service.
To begin with, they argue that the first priority should be banning the small clusters of hate group members as they form. This is more practical, since there are far more small clusters than large ones, and it's these small clusters that provide the resiliency that dilutes the impact of large-scale bans. In conjunction with this, the platform should randomly ban some of the members of these clusters. This both undercuts the hate groups' resiliency and, because the total number of bans is relatively small and randomly distributed, reduces the chance of any backlash.
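To make those two interventions concrete, here is a toy sketch, not the authors' actual model: clusters are just lists of member IDs, and the thresholds, fractions, and names are all invented for illustration.

```python
import random

def ban_small_clusters(clusters, max_size):
    """Policy 1: remove entire clusters at or below a size threshold."""
    return [c for c in clusters if len(c) > max_size]

def ban_random_members(clusters, max_size, fraction, rng):
    """Policy 2: within small clusters, ban a random fraction of
    members, spreading removals so no single group is targeted."""
    out = []
    for members in clusters:
        if len(members) <= max_size:
            members = [m for m in members if rng.random() > fraction]
        if members:
            out.append(members)
    return out

# Made-up roster: five clusters of hypothetical member IDs.
rng = random.Random(0)
clusters = [[f"user{i}_{j}" for j in range(size)]
            for i, size in enumerate([3, 5, 8, 40, 120])]

small_gone = ban_small_clusters(clusters, max_size=10)
thinned = ban_random_members(clusters, max_size=10, fraction=0.2, rng=rng)
```

The first policy leaves only the two large clusters; the second leaves every large cluster untouched and quietly thins the small ones, which is the low-backlash property the authors' suggestion relies on.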
Their last suggestion is that platforms encourage groups that actively oppose the hate groups. Part of the reason people form these insular groups is that their opinions aren't welcome in wider society; groups on social networks allow them to express unpopular opinions without fear of opposition or sanction. By raising the number and prominence of opposing groups, a platform can reduce the comfort level of those prone to white supremacy and other forms of hatred.
Nature, 2019. DOI: 10.1038/s41586-019-1494-7 (About DOIs).