Detecting Twitter Bots: a Test of Botometer with Bots of Different Complexity

How does the behavior of bots influence their detection by the popular Twitter bot detection framework Botometer? This question was addressed in a Master’s thesis written at Bielefeld University by Merle Reimann.

Botometer gives analyzed accounts a score between 0 and 1 (0-5 on the website), where 0 stands for a human and 1 for a bot account. At the beginning of September, Botometer was updated. The update introduced a new model to improve bot detection: Botometer now computes the score based on the probability that an account belongs to a certain bot class (astroturf, fake follower, financial, self-declared, spammer).
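To make the scoring scheme concrete, here is a minimal sketch. The class names come from the post; the aggregation rule (taking the highest class probability as the overall 0-1 bot score, and scaling by 5 for the website display) is an illustrative assumption, not Botometer’s actual model.

```python
# Illustrative sketch of a per-class bot score. The class names are
# Botometer's; the aggregation rule (max probability, scaled to 0-5
# for display) is an assumption for illustration only.

BOT_CLASSES = ["astroturf", "fake_follower", "financial", "self_declared", "spammer"]

def overall_bot_score(class_probs: dict) -> float:
    """Return a 0-1 bot score as the highest per-class probability."""
    return max(class_probs.get(c, 0.0) for c in BOT_CLASSES)

def display_score(score: float) -> float:
    """Scale the 0-1 score to the 0-5 range shown on the website."""
    return round(score * 5, 1)

probs = {"astroturf": 0.1, "fake_follower": 0.8, "financial": 0.05,
         "self_declared": 0.2, "spammer": 0.3}
print(overall_bot_score(probs))  # 0.8
print(display_score(0.8))        # 4.0
```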

One of the experiments conducted for the thesis started when the new Botometer version was introduced. Four bots that tweeted based on templates and showed slightly different behavior were used. The first and third bots tweeted every 12 hours, but the latter translated its tweets from English to German to French and back to English to vary the content slightly. The second and fourth bots tweeted at irregular intervals between 9 am and 8 pm, and again the latter bot translated its tweets.
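The two tweeting schedules can be sketched as follows. This is hypothetical code, not the thesis implementation: a fixed 12-hour cycle versus a random minute drawn from the next day’s 9 am - 8 pm window.

```python
# Hypothetical sketch of the two schedules described above (not the
# thesis implementation): a fixed 12-hour cycle versus a random time
# in the 9 am - 8 pm window of the following day.
import random
from datetime import datetime, timedelta

def next_fixed_slot(last: datetime) -> datetime:
    """Bots 1 and 3: tweet exactly every 12 hours."""
    return last + timedelta(hours=12)

def next_random_slot(last: datetime, rng: random.Random) -> datetime:
    """Bots 2 and 4: tweet at a random minute between 9 am and 8 pm."""
    day = last.date() + timedelta(days=1)
    minute = rng.randint(0, (20 - 9) * 60)  # minutes after 9:00
    return datetime(day.year, day.month, day.day, 9) + timedelta(minutes=minute)

last = datetime(2020, 9, 1, 8, 0)
print(next_fixed_slot(last))  # 2020-09-01 20:00:00
slot = next_random_slot(last, random.Random(0))
assert 9 <= slot.hour <= 20
```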

Over the course of the experiment, the second and third bots gained additional abilities. After following a number of users and unfollowing everyone who did not follow back, the bots started to retweet. Six days later, the two bots began to like tweets of other accounts. Both behaviors, retweeting and liking, improved the scores considerably compared to the bots that did not change their behavior, meaning Botometer rated the bots with the additional abilities as far less likely to be bots than the others.

It seems that Botometer is good at detecting simple bots but has problems detecting bots that show more complex behavior and do not belong to one of the bot classes Botometer uses.

New Experiment Online – Participants Wanted

We are currently conducting our next experiment to investigate how people perceive and rate Twitter accounts and are still looking for test participants. If you are interested (and would like to take the opportunity to win a 25€ voucher), please follow this link: Participation takes around 10-15 minutes (please note that German language skills are required).

Foreign Interference through Social Media – Parliamentary submission

On 5 December 2019, the Australian Senate resolved to establish a Select Committee on Foreign Interference through Social Media to inquire into and report on the risk posed to Australia’s democracy by foreign interference through social media.

A team of researchers from the VOSON Lab (School of Sociology, Australian National University) and the News and Media Research Centre (University of Canberra), including Robert Ackland and Mathieu O’Neil, submitted an approach to study interference through social media, which involves the use of computational methods (network and text analysis) and data visualisation techniques. The study aimed at identifying styles of IRA troll account activity in the Australian political Twittersphere, utilising a large-scale Twitter dataset collected over a year (September 2015 to October 2016).

The results of these analyses demonstrate that the IRA trolling operation did not focus on persuasion and efforts to directly shift political views, nor did it generally seek to change the shape of online discussion. Rather, the trolls tended to pursue a strategy of ‘resonance’: embedding themselves in a community and, from there, working to activate at least certain sections of it for strategic aims.

The submission is available HERE

PD Dr Florian Muhle visits the VOSON Lab

PD Dr Florian Muhle (Faculty of Sociology, Bielefeld University, Germany) visited the VOSON Lab (School of Sociology, Australian National University) to establish the “ANU Social Dynamics of Attention Centre”.

During his visit, PD Dr Muhle gave the seminar presentation “Types and forms of automation on Twitter” at the ANU School of Sociology Seminar Series. In addition, he presented “Twitter in the news: Some problems of using Twitter data to represent public opinion” at the research workshop “Social and behavioral dynamics of attention” and participated in the RSSS Data Sprint 2020.

Florian’s collaboration with the VOSON Lab started in 2017 with the Germany Joint Research Co-operation Scheme 2017-18 (Universities Australia – DAAD) project “Socialbots as political actors? Autonomous communication technologies as tools and objects for digital sociology” and has continued since 2019 under the project “Unbiased Bots that Build Bridges”, funded by the Volkswagen Foundation.

RSSS Data Sprint 2020 at ANU: tools, methods, data and ideas

Professor Robert Ackland (ANU) and PD Dr Florian Muhle (Bielefeld University), together with Australian colleagues, held a data sprint event on ‘Incivility on Social Media’ at the Australian National University in February 2020, with the goal of developing a theoretically informed approach to the detection and analysis of uncivil behaviour on social media.

The four-day event consisted of two days of masterclasses and two days of data sprint. Professor Robert Ackland’s opening masterclass was an introduction to data collection and network visualisation using the VOSON Lab open-source tools vosonSML and VOSON Dashboard, as well as the Statnet package for statistical network analysis (ERGM). Professor Ken Benoit presented an introduction to text analysis and the quanteda package. Dr Ignacio Ojea introduced the agent-based modelling software NetLogo, developed by Northwestern’s Center for Connected Learning and Computer-Based Modeling. The masterclasses closed with an introduction to machine learning and natural language processing using Python.

During the data sprint, the group intensively explored incivility in two Reddit groups, covering data collection, quantitative and qualitative data analysis, and data visualisation. The session wrapped up with preliminary findings, which are the stepping stones for a research paper.

Twitterbots: Sample chapter on bot personalities

Tony Veale (University College Dublin) shares with us today a sample chapter on bot personalities on Twitter from his book “Twitterbots: Making Machines that Make Meaning” (written together with Mike Cook, available through MIT Press).

In the chapter, he argues that word choice on social media is revealing in terms of personality. Sentiment tools that quantify an author’s personality can thus be applied not only to human users but also to bots, revealing different “bot personalities” and measuring their similarity to other accounts.
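The idea of profiling word choice can be sketched in a few lines. The tiny word lists and the cosine comparison below are illustrative assumptions, not the tools used in the chapter.

```python
# Minimal lexicon-based sketch: score an account's word choice against
# small hand-made word lists and compare accounts via their profiles.
# The lexicons and the cosine comparison are illustrative assumptions,
# not the sentiment tools referred to in the chapter.
import math
from collections import Counter

LEXICON = {"positive": {"great", "love", "happy"},
           "negative": {"bad", "hate", "sad"}}

def profile(text: str) -> dict:
    """Fraction of an account's words falling in each lexicon dimension."""
    words = Counter(text.lower().split())
    total = sum(words.values()) or 1
    return {dim: sum(words[w] for w in vocab) / total
            for dim, vocab in LEXICON.items()}

def similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two word-choice profiles."""
    dot = sum(a[k] * b[k] for k in LEXICON)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

bot_a = profile("love love great happy")
bot_b = profile("great happy day")
print(similarity(bot_a, bot_b))  # 1.0 (both profiles purely positive)
```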

First experimental results about bot identification

We just got the first results from an experiment (n=65) dealing with the question of whether accounts that are labeled as bots are also identified as bots by human users, and what effect this has on how human users perceive these accounts. In this experiment, we were concerned with the perception of bots in general and not with social bots (social bots, as they are typically defined in the literature, hide their true identity and would not identify themselves as bots).

In addition to the control group (no declared bot account), we used two different labels for accounts to enable users to identify them as bots (“RainerBot” in one condition and “(!)Bot-Account Rainer Schmitt-Sasse” in the other). We showed participants a section of an online political discussion the account contributed to (the discussion had a total of 9 participants). Participants of the experiment were then asked a set of questions, including whether they thought that any of the accounts contributing to the discussion was a bot.
T-tests were calculated to examine whether participants were able to identify the account as a bot. Interestingly, we found that “RainerBot” and “(!)Bot-Account Rainer Schmitt-Sasse” were not significantly identified as either bot or non-bot (p > .208), while participants in the control group mostly identified the account as a non-bot account (p < .002).
These results show that a name such as “RainerBot” is an indication to some users that they are dealing with a bot, but not a clear indication to the majority of users. This suggests that users identify accounts as human by default and that additional means are necessary for users to be able to identify bots in online discussions.
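The kind of test reported above can be sketched as a one-sample t-test of binary bot/not-bot judgements against the chance level of 0.5. The data below are made up for illustration, not the experiment’s data, and only the t statistic is computed (a p-value would additionally need the t distribution’s CDF).

```python
# Hypothetical sketch of a one-sample t-test on binary judgements
# (1 = "bot", 0 = "not a bot") against the chance level mu0 = 0.5.
# The responses here are invented, not the experiment's data.
import math
from statistics import mean, stdev

def one_sample_t(responses: list, mu0: float = 0.5) -> float:
    """t statistic of the sample mean against mu0."""
    n = len(responses)
    return (mean(responses) - mu0) / (stdev(responses) / math.sqrt(n))

# 20 hypothetical participants, 6 of whom judged the account to be a bot:
judgements = [1] * 6 + [0] * 14
t = one_sample_t(judgements)
print(round(t, 2))  # -1.9
```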

Publication about automated political communication on Twitter

An article by Florian Muhle, Robert Ackland and Timothy Graham on the popularity and influence of automated accounts in online conversations during the 2016 US presidential election campaign has been published, based on a presentation at the 39th Congress of the German Sociological Association. It can be downloaded here:

U3B presents web app at the Uni.Stadt.Fest

Last Sunday, we presented our interactive installation “Bot or Not” at the festivities for the 50th anniversary of Bielefeld University. Passersby could talk with us, learn about our project, and try out a web app that we installed on four touchscreens. The web app showed Tweets from Twitter accounts suspected to be social bots. Users could scroll through the Tweets, view some additional information, and then decide whether they thought they were seeing the Tweets of a bot or a human.

Our presentation was a great success: many passersby were interested in our project, and at times people were even waiting in line to try out our web app. Although users of our web app were faced with a binary choice (bot or not), we actually wanted to show people that it can be difficult to identify automated accounts on Twitter. To that end, we had interesting conversations which showed us that some forms of automation are easy for people to identify, for example if they result in repetitive patterns of posts with similar content, while other forms are more difficult to identify.

Two visitors at our booth testing the web app

See also the news of the Faculty of Sociology (in German):

About the entire event:

Using chatbots in consulting

Yesterday our research assistant Philipp Waag (Bielefeld University of Applied Sciences) talked about chatbots and their possible applications in consulting at a workshop organized by the Graduierteninstitut NRW at the University of Applied Sciences in Bielefeld („Digital Transformation in Health, Care and Social Work“). Based on sociological research by members of our project (Elena Esposito, Florian Muhle), Philipp talked about chatbots’ capabilities and flaws in consultatory interactions and in the interactions clients have with consulting organisations prior to the actual consultation. Although conversational agents are subject to communicative restrictions compared to humans, their benefits (such as being always friendly, always reachable, and well supplied with data) make them a beneficial tool for consulting and presumably for other forms of intervention as well.