Social bots and other malicious actors have a significant presence on Twitter. It is increasingly clear that some of their activities can have a negative impact on public health. This guide provides an overview of the types of malicious actors currently active on Twitter by highlighting the characteristic behaviors and strategies employed. It covers both automated accounts (including traditional spambots, social spambots, content polluters, and fake followers) and human users (primarily trolls). It also addresses the unique threat of state-sponsored trolls. We utilize examples from our own research on vaccination to illustrate.

Social bots and other malicious actors are a disruptive force on online social networks. In their editorial, Allem and Ferrara pose the question, “Could social bots pose a threat to public health?” 1 Their answer is a resounding “Yes.” Although bots and trolls have been widely covered in the media for their role in politics, only in the past year has their activity garnered significant attention in public health.

The threat of social bots to public health is multifaceted. Bots can directly influence users by spreading content that works against public health goals, such as antivaccine propaganda or spam for products such as e-cigarettes. 1–5 The volume of bot-produced posts can also distort efforts to use social media data to gauge public sentiment, potentially limiting the usefulness of novel surveillance efforts. More fundamentally, the practice of public health depends on clear communication between practitioners and the communities they serve, and the interference of malicious actors could erode public confidence in online communication. 6,7

The diversity of malicious actors and their multifarious goals adds complexity to research efforts that use Twitter. Bots are now part of the social media landscape, and although it may not be possible to stop their influence, it is vital that public health researchers and practitioners recognize the potential harms and develop strategies to address bot- and troll-driven messages.

In this commentary, we highlight the diversity of malicious actors on Twitter, describe some characteristic behaviors, and introduce ways to recognize social bots and other malicious actors in context. This commentary evolved from our mixed-methods research studying vaccination discourse on Twitter and a tutorial presented at the Social Computing, Behavioral–Cultural Modeling, and Prediction and Behavioral Representation in Modeling and Simulation conference in July 2018.

The term “bot” can connote different things depending on context. Pew defines bots broadly as “accounts that can post content or interact with other users in an automated way and without direct human input.” 9 Bots, by definition, are not human. Some bots try to imitate humans by mimicking their online behaviors. A social bot “automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior.” 10 (p96) Further blurring the line between human and machine, there are “cyborg” accounts that alternate between human control and automation.

Bots exist on all social media platforms; however, our research and most existing scholarship is focused on Twitter bots. While it is nearly impossible to distinguish automated accounts with certainty, there are bot-like behaviors that indicate an account could be automated. 11 Some bot-like behaviors may be consistent across platforms, but more research is necessary to identify characteristic behaviors by platform. The true number of automated accounts on Twitter is unknown, but a recent Pew report suggests that up to two thirds of all activity on Twitter is from automated accounts. 9 Even a single automated account can have an outsized impact, as the 500 most active bots shared 22% of the total links on Twitter. 9
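The idea that bot-like behaviors, rather than certainty, drive bot identification can be illustrated with a minimal heuristic sketch. The thresholds, function name, and posting-rate cutoffs below are illustrative assumptions, not validated values from this commentary or from production bot-detection tools; real research typically relies on trained classifiers over many account features.

```python
from datetime import datetime, timedelta
from statistics import pstdev

# Illustrative (assumed) thresholds, not validated research cutoffs:
DAILY_POST_THRESHOLD = 72     # sustained ~3 posts/hour is suspicious
MIN_GAP_STDDEV_SECONDS = 5.0  # near-identical gaps suggest scheduled posting

def bot_like_score(timestamps):
    """Return (posts_per_day, gap_stddev, flagged) for one account's
    posting datetimes. Flags high volume or machine-regular timing."""
    if len(timestamps) < 2:
        return 0.0, None, False
    timestamps = sorted(timestamps)
    span = (timestamps[-1] - timestamps[0]).total_seconds() or 1.0
    posts_per_day = len(timestamps) * 86400.0 / span
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    gap_stddev = pstdev(gaps)
    flagged = (posts_per_day > DAILY_POST_THRESHOLD
               or gap_stddev < MIN_GAP_STDDEV_SECONDS)
    return posts_per_day, gap_stddev, flagged

# Usage: an account posting exactly every 10 minutes all day is flagged
# both for volume and for perfectly regular spacing.
start = datetime(2018, 7, 1)
machine_like = [start + timedelta(minutes=10 * i) for i in range(144)]
print(bot_like_score(machine_like)[2])  # True
```

A heuristic like this can only suggest that an account *could* be automated; as the commentary notes, certainty is nearly impossible, and cyborg accounts that mix human and automated activity evade simple timing rules entirely.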