Supplementary Information for “Bots increase exposure to negative and inflammatory content in online social systems”

Abstract

One of the most daunting tasks in social media analysis is determining whether a user account is controlled by a human or by software (i.e., a bot). A great deal of research aims to address this issue, including our own efforts [1, 2] and others' [3-6]. In this realm, Botometer (formerly BotOrNot) represents, as of today, the only openly accessible solution [7]. It consists of an Application Programming Interface (API), developed in Python, that allows users to interact programmatically with the underlying machine learning system. Botometer has proven quite accurate in detecting social bots [2, 7]. However, the public interface of Botometer has two limitations that prevented us from using it in this project.

First, the framework relies on the Twitter API to collect recent data about the accounts to inspect. The Twitter API imposes very strict query rate limits, making it impossible to analyze more than a few thousand accounts with the public Botometer Python API. In this study, our goal is to detect bots in a very large population of over 2 million users, which requires an ad hoc large-scale bot detection solution.

The second limitation derives, again, from the Twitter API: when Botometer inspects an account that has been suspended, protected, quarantined, or deleted, the Twitter API does not provide any details about it, leaving Botometer unable to make any determination. Since this study will show that a significant portion of the bot accounts involved in MacronLeaks were suspended, quarantined, or deleted shortly after Election Day (May 7, 2017), Botometer would not be a suitable tool to analyze them.
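To make the rate-limit argument above concrete, the back-of-the-envelope calculation below estimates how long scoring 2 million accounts through a rate-limited API would take. The specific figures (900 requests per 15-minute window, one request per account) are illustrative assumptions for the sketch, not the exact limits of the Twitter API or Botometer.

```python
# Back-of-the-envelope estimate of how long it would take to inspect
# 2 million accounts through a rate-limited API endpoint.
ACCOUNTS = 2_000_000

# Hypothetical limit: 900 requests per 15-minute window (assumed, for
# illustration only; actual Twitter API limits vary by endpoint).
REQUESTS_PER_WINDOW = 900
WINDOW_MINUTES = 15

# Assume one API request per account to fetch its recent activity.
windows_needed = ACCOUNTS / REQUESTS_PER_WINDOW
days = windows_needed * WINDOW_MINUTES / (60 * 24)

print(f"Estimated time: {days:.1f} days of continuous querying")
```

Even under these generous assumptions the scan would take roughly three weeks of uninterrupted querying, which is why an ad hoc large-scale solution was needed instead of the public Botometer API.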

Date
March 11, 2026
Authors
Massimo Stella, Emilio Ferrara, Manlio De Domenico