In short: Approximately two-thirds of Americans are aware of the use of 'bots' on social media, and the majority of those believe the accounts are being used with malicious intent, according to a recent Pew Research study of 4,581 US adults. The most important takeaway from the study, though, may be that social media bots appear to be having a profoundly negative impact on user confidence. In total, 66-percent of those surveyed had heard of bots, while around 34-percent said they had never heard of them. Meanwhile, as many as 80-percent of those who have heard of bot accounts believe they are used for 'bad' purposes, while only 17-percent believe they are used for 'good' purposes. Although over 81-percent of respondents who have heard of bots believe a 'fair' amount of the news users see comes from them, only 7-percent are very confident in their own ability to spot one. Just 40-percent are even somewhat confident in their own abilities, and 66-percent firmly believe that bots are having a predominantly negative impact on how informed the average American is.
Background: 'Fake news' and bias have become hot-button topics over the past several years, spurring controversy and debate as well as drastic changes in how many tech companies operate. The perception that bots are mostly a negative influence and that they typically don't spread accurate or unbiased information hasn't changed, but the fervor with which they are met has grown considerably. That growth has accelerated since news began to surface about the role of 'bot' accounts and other fake accounts in the 2016 US Presidential elections. In the time since Pew Research first looked into social media users' perceptions of automated accounts, however, the confidence users express in their ability to distinguish real accounts from fake ones has dropped significantly. More directly, user confidence stood at 84-percent in the previous poll, compared to the 47-percent who are now at least partially confident they can spot a bot and, by proxy, fake news.
That trend also shows through recent shifts in the management of social media sites and products. Changes over the past couple of years have placed privacy at the forefront following several major security breaches, but a lot of focus has also been on halting the spread of fake news and shutting down fake accounts. For example, Twitter has been cracking down on those types of accounts, shutting down over 70 million of them in the two months leading up to July of this year. More recently, Facebook has implemented measures to ensure that fake news spread by such accounts is seen less frequently and is accompanied by a well-researched rebuttal.
Impact: Social media companies' response to the spread of bots and to public perception has largely been to combat their misuse, and the new research may prove useful in that regard as well. Respondents to the study seem to indicate that at least some use of automation in spreading information or news is acceptable. Interestingly, the most commonly accepted use is actually tied to the government. Namely, as many as 78-percent of respondents believe the government could use bots to relay information about national emergencies without causing too much trouble. By comparison, only 55-percent and 53-percent, respectively, believe businesses should use bots to promote a new product or to interact with customers.