Social bots are indistinguishable from real users in direct communication on Twitter

Social bots, automated software programs that simulate human behavior patterns in social networks and sometimes appear as "fake accounts," are indistinguishable from humans when they communicate directly with other users. This is shown by a recent study by researchers Ema Kušen and Mark Strembeck at the Vienna University of Economics and Business (WU).


Political vote-rigging and disinformation are not uncommon on the web today, especially in the run-up to elections and other political events. Social bots are software programs that simulate a human presence on the web. A recent study by Ema Kušen and Mark Strembeck from the Institute for Information Systems and New Media at WU shows just how well they do this. The two researchers analyzed 4.4 million tweets to investigate how social bots influence the mood on the web and how they change their behavior when communicating directly with other users.

Well camouflaged

The results show that human users generally tend to follow the prevailing mood of a discussion in their tweets, while social bots try to reverse that mood by expressing opposite emotions. For the positive event of Thanksgiving, for example, a social bot tweeted: "Sissy Mitt Romney signed Massachusetts gun ban #thanksgiving #Trump #MAGA".

In the case of controversial events such as elections, social bots send out particularly polarizing, emotionally charged messages in an attempt to sway public opinion. In the context of the US presidential election, for example: "ObamaFail I'll be so happy to see this joke move out of the White House!!! #VoteTrumpPence16"

The analysis also made clear that social bots even inject such polarizing messages into thematically unrelated discussions. In direct communication with human users, however, this behavior changes: when social bots address their messages directly to Twitter users (using the @TwitterUser form of address), they, too, adapt to the general mood. Strembeck explains, "Social bots are no longer distinguishable from humans in direct communication based on the emotions they send."

Recognizing patterns

In their analysis, Strembeck and Kušen were also able to identify a number of statistically significant patterns, so-called "emotion exchange motifs," that are typical of direct communication with human users. These newly identified patterns, covering eight different emotions, represent an important step for the research. "On the occasion of the current EU election, for example, there were warnings that various interest groups might try to influence the election in social networks. Among other things, the results can help to identify social bots more reliably in the future. In various follow-up studies, it will now be necessary to clarify why the behavior of social bots when sending broadcast messages differs from their behavior during direct communication," says study author Strembeck.


Link to the full study: Sciencedirect.com
