Only 1600 bots? This sounds like the makings of a moral panic to me.
There's a lot of misunderstanding about what these bots are capable of... and I keep seeing top-ranking comments on Reddit talking about these bots daily.
The media seems to be spreading the idea that they are autonomous agents automatically posting content on politically useful threads, with bot voting rings pumping them up, when in reality they still rely heavily on a human-intensive process requiring lots of manual targeting and copywriting. And Reddit/Twitter/etc. are very good at detecting voting rings, having been perfecting detection algorithms for over a decade.
And do people think Russia or fringe right-wing groups have some super-effective autonomous bots that weren't also available to the best paid US consultants?
If these fake Twitter accounts, usually bots only followed by other bots, were understood in terms of the hundreds of millions of real users on Twitter, the billions spent by both real parties, and within the context of the technical limitations of bots, it wouldn't seem so scary.
The real question is how many of those bots are spambots and bots attempting to pose as real people, as opposed to bots that are open about the fact that they are bots. There are plenty of top-shelf novelty accounts on Twitter that are also bots.
That's one reason why a no bot policy wouldn't work.
A bot posting articles from your favourite news site is actually useful. A bot which prints out Trump's tweets, burns them and then tweets out the video [@burnedyourtweet] is mildly amusing (and some pretty smart engineering). A bot which automatically replies to every tweet by $personality with a meme is usually an 'orrible troll, but occasionally amusing and often owned by someone who's very convinced the First Amendment applies to Twitter. A retweet bot which does a plausible impression of an angry Trumper might be indistinguishable from the real thing without investigating its network, and I don't think eliminating the real thing would be a great move for Twitter to make.
The other reason is that Twitter's review process is more or less completely random in its responses (I'm reminded of the girl who reported literally the same abusive tweet directed at her from three troll accounts, and received a different response for each one).
This is why Reddit style voting is super useful. Twitter suffers from a noise problem as a side-effect of focusing so much on newness and sorting by date (I know they've made recent changes here).
This community-driven ranking seems like the best solution to low-quality bots posting. A mod-driven centralized editing process comes with plenty of issues, as we've seen on Reddit. And yes, bots provide plenty of value, far outweighing some niche political accounts making the rounds.
Not to mention, it's easy for a single person to create thousands of bot accounts; you need those thousand other bots so your one primary bot has enough followers/likes. But in the press that will be counted as thousands of bots instead of one. And getting those bots a meaningful audience is very challenging.
I have plenty of fake 'bot' followers, and 99.9% of their followers are other bots. It's largely a bunch of bots tweeting to other bots.
The main risk is hashtags being gamed and people buying voting rings/followers... which has been a problem on the internet forever.
The open bots have an incentive to make one bot for each use case; one person trying to make spam bots has an incentive to create as many bots as possible. I have no numbers on the subject, but I would be shocked if it were anywhere close to fifty percent.
Frankly, I'd expect the percentage to be significantly less than 50%. Determining good from bad bots without a manual check is an interesting problem, though, and it's not cool on Twitter's part if they ban legit bots that people find useful/entertaining on top of the bad actors.
There were about 400,000 bots posting political messages during the 2016 U.S. presidential election on Twitter, according to a research paper by Emilio Ferrara, an assistant professor at the University of Southern California.
> He told Bloomberg that he has discovered that the same group of 1,600 bots tweeting extremist right-wing posts in the U.S. elections also posted anti-Macron sentiment during the French elections and extremist right-wing content during the German elections this year.
So of those 400k political bots, 1600 were identified as talking about fringe topics.
Not to mention, it's not clear how this fits into the bigger picture. What percentage of ALL Twitter accounts are bots? How many of those bots are active? How many actually have an audience of real people, not just other bots? For example: 400k fake accounts with few real followers, out of hundreds of millions of real users talking politics, may not seem as significant...
Not saying it's not a 'problem', but it needs to be understood in context, especially considering the constant news coverage and outrage.
>So of those 400k political bots, 1600 were identified as talking about fringe topics.
Read it again. Of those 400k bots, 1600 were identified talking about the US elections, anti-Macron sentiment, and extremist far-right topics. All three topics. That could be one person/group reusing bots.
Additionally, hundreds of millions of real users talking politics is unlikely. Twitter has 300 million active users per month, 9-15% of them bots, and the US election had about 120 million voters. Hundreds of millions would be nearly everyone.
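A quick back-of-the-envelope check of that claim (the 300M MAU, 9-15% bot share, and ~120M voter figures are the ones quoted above; everything else is just derived from them):

```python
# Rough sanity check of the scale claims above.
monthly_active_users = 300_000_000          # Twitter MAU figure quoted above
bot_share_low, bot_share_high = 0.09, 0.15  # estimated bot fraction

bots_low = int(monthly_active_users * bot_share_low)    # 27,000,000
bots_high = int(monthly_active_users * bot_share_high)  # 45,000,000
humans_low = monthly_active_users - bots_high           # 255,000,000

us_voters_2016 = 120_000_000  # approximate 2016 turnout quoted above

print(f"Estimated bots: {bots_low:,} to {bots_high:,}")
print(f"Real accounts: at least {humans_low:,}")
```

Even at the high end of the bot estimate there are at most ~255M real accounts worldwide, so "hundreds of millions" of them talking US politics would indeed mean nearly every human account, roughly double the entire US electorate.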
Nothing posted is proof of that. The 1600 number is interesting as it means one entity was trying to influence elections in three countries. It is not the number that were far-right, that number was not in this article.
Since context is supposed to be a needed thing, you could have just found the research paper rather than performing guesstimate math. Fifteen percent of users tweeting about the election were bots, twenty percent of tweets, and there were about three Trump bots for every Hillary bot.
> I read into the investigations on FB/Twitter and they only found 10k-100k ads/posts bought by Russians on each platform.
Those numbers aren't ads/posts, they're dollars. In September, Facebook confirmed that Russian operatives spent $100,000 on psyops advertising.
Keep in mind that (1) $100,000 is 15-25 million impressions reaching millions of highly-targeted people (Cambridge Analytica is now under investigation by the HPSCI), all amplified by the targets' engagement, and that (2) FB is sharing only 100% confirmed activity discovered during very early, self-investigated findings.
Keep in mind - total spent on the 2016 election according to CBS was $6.8 Billion [1].
In that context, how could $100,000 of FB ads (about 0.0015%) possibly be relevant?
Speculating that it might be relevant if we hypothetically discovered it was 100x bigger than we currently know defeats the whole argument, to my mind.
"Big if true!" is the rallying cry of the whole Russiagate fiasco. "Big if Bigger!" is just the next contortion.
It's a shame that hostile foreign governments can do this to us without strong retaliation. We should be doing the same to their social media platforms, such as VK and OK, but it appears we are not, though we are more than capable of equal retaliation.