Two computer science students create tool to detect “bots” on Twitter

Two computer science students created a Google Chrome extension that, when clicked, tells you whether a Twitter user appears to be a bot.

They claim it has 93.5% accuracy[1] (but see the footnote for a hint at some of the problems with how they reached that conclusion). It uses “machine learning” to attempt to identify Twitter accounts that may be automated “propaganda” accounts. Per the article, their classifier was trained on tweets identified as left- or right-leaning – and accounts they could not categorize as left or right must be bots. Or something. Regardless, that implies political views play a role in classification as a bot. Would a bot tweeting about cats be identified? Would a propaganda bot promoting backyard gardening?

The results could also be manipulated by users. When the bot check reports its result, you can optionally agree or disagree – and that feedback is fed back into the classifier. A sufficiently large group of users could likely re-train the classifier to intentionally classify real people as bots, and bots as real people.
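To make that risk concrete, here is a minimal sketch of feedback-label poisoning, assuming – as the description above suggests – that user agree/disagree votes are folded back into the training set as labels. The tweets, labels and pipeline are invented for illustration; this is not the students’ actual code:

```python
# Toy demonstration of feedback-label poisoning. Everything here is
# hypothetical; the students' actual system is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Initial training data: 1 = bot, 0 = human
tweets = ["BREAKING: share this now!!!", "Had a great hike today",
          "Click here for free followers", "Coffee with an old friend"]
labels = [1, 0, 1, 0]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(tweets), labels)

target = "Had a great hike today"
print(clf.predict(vec.transform([target])))    # likely [0]: human

# A coordinated group "disagrees" 50 times; each vote becomes a label.
poisoned_tweets = tweets + [target] * 50
poisoned_labels = labels + [1] * 50            # 50 votes claiming "bot"

vec2 = TfidfVectorizer()
clf2 = LogisticRegression().fit(vec2.fit_transform(poisoned_tweets),
                                poisoned_labels)
print(clf2.predict(vec2.transform([target])))  # now [1]: flagged as a bot
```

With enough coordinated votes, the retrained model contradicts its original, correct judgment.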

Source: The College Kids Doing What Twitter Won’t | WIRED

I am not convinced that software tools can classify propaganda bots with sufficient accuracy to be useful over the long term. There will be an arms race to create better bots that appear more natural. I fear that such tools may be used to stifle speech by incorrectly – or deliberately – classifying legitimate speech as “bot” generated to have that speech throttled down or banned.

Note also that Twitter – and Facebook – profit by having emotionally engaged users reading, liking, sharing and following more people. It is not yet in their financial interest to be aggressive about shutting down bots.

Footnote

How good is 93.5% accuracy? Let’s consider a different example to understand this: the use of drug search dogs in schools to locate drugs in lockers.

Let’s say the dog detects drugs in a locker 98% of the time when drugs are present, and alerts incorrectly on 2% of clean lockers (the false positive rate). Further, let’s assume there are 2,000 lockers in the school.

Let’s assume 1% of the students actually have drugs in their locker.

1% of 2,000 means 20 lockers actually contain drugs. (And with a 98% detection rate, there is a chance the dog will miss one of those 20.)

In using the dog, the police will incorrectly flag 2% (the false positive rate) of the roughly 1,980 drug-free lockers – about 40 lockers – in a school where only 20 lockers actually contain drugs.

In other words, twice as many students will be falsely accused of having drugs as students who actually have drugs.

When doing broad classification searches, even a 98% accuracy rate is problematic, as it may produce more false positives than true positives – which is not what you would intuitively guess when you hear “98% accuracy” or, in this Twitter bot analysis, 93.5% accuracy.
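The same arithmetic in a few lines of Python – just the base-rate calculation above, nothing from the students’ analysis:

```python
# Why a "98% accurate" test can accuse more innocent people than guilty.
lockers     = 2000
prevalence  = 0.01   # 1% of lockers actually contain drugs
sensitivity = 0.98   # dog finds drugs when they are present
false_pos   = 0.02   # dog alerts on a clean locker

with_drugs    = lockers * prevalence           # 20 lockers
without_drugs = lockers - with_drugs           # 1980 lockers

true_alerts  = with_drugs * sensitivity        # ~19.6
false_alerts = without_drugs * false_pos       # ~39.6

print(f"True alerts:  {true_alerts:.0f}")      # ~20
print(f"False alerts: {false_alerts:.0f}")     # ~40
print(f"Chance an alert is correct: "
      f"{true_alerts / (true_alerts + false_alerts):.0%}")  # ~33%
```

Only about a third of flagged lockers actually contain drugs, despite the “98% accurate” dog.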

Further, in arriving at their 93.5% figure – and while their approach is admirable, possibly the best that can be done – they compared tweets from verified Twitter users to suspected “bots” from unverified accounts. Most Twitter accounts are unverified, so they are only hypothesizing that an account is a bot when producing this metric. (For the record, I think the two students have done excellent work, and my comments should in no way be interpreted as criticism of them. I have a degree in computer science and an M.S. in software engineering, and have familiarity – but not expertise – with machine learning, classifiers and pattern matching systems.)

Indeed, as the article points out, hundreds of people have already complained to another bot checker about being falsely classified as a bot. The Wired reporter attempted to contact the account holders of a small sample of accounts identified as bots and quickly found accounts that appeared to be run by real people.

Side note: the linked article in Wired is excellent journalism, something I certainly do not see enough of! Glad to see this article!

Could you throw a U.S. Presidential election for just a few dollars per day?

Parties in Russia bought ads on U.S. social media regarding candidates for U.S. President in 2016. About $100,000 was spent on Facebook ads, of which only 44% ran prior to the election. According to Facebook, about half of the Russia connected ads were bought from computers that appeared to be in the U.S.

The leading U.S. Presidential candidates raised (and presumably spent) a little over $2.1 billion for their campaigns, according to OpenSecrets.org, citing data from the Center for Responsive Politics. It is not clear from the listing whether this includes the primary phase or only the November general election. Let’s assume it includes the primaries too.

Let’s further assume this money was spent from January 1, 2016 onward to the election on November 8, 2016 – about 312 days. That comes to roughly $6.7 million spent on campaigning every day.

Meanwhile, actors in Russia placed ads on Facebook’s social media platform. According to Facebook,

“For 50% of the ads, less than $3 was spent; for 99% of the ads, less than $1,000 was spent”

Additionally, only 44% of the ads ran prior to the November 8th election, with 56% appearing afterward. That puts the pre-election share of the roughly $100,000 total at about $44,000.
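A quick back-of-the-envelope check of these figures – all numbers are the approximations quoted above:

```python
from datetime import date

# Campaign spending per day vs. the pre-election Russia connected ad buy.
campaign_total = 2.1e9                                # ~$2.1 billion raised
days = (date(2016, 11, 8) - date(2016, 1, 1)).days    # 312 days
daily_campaign = campaign_total / days
print(f"Campaign spend per day: ${daily_campaign:,.0f}")   # ~$6.7 million

fb_ads_total = 100_000                                # Russia connected FB ads
pre_election = fb_ads_total * 0.44                    # 44% ran before Nov 8
print(f"Pre-election ad buy:    ${pre_election:,.0f}")     # ~$44,000
print(f"One day of campaign spending is roughly "
      f"{daily_campaign / pre_election:.0f}x the entire "
      f"pre-election Russia connected ad buy")              # ~153x
```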

How were these ads identified as having originated in Russia?

According to Facebook, they used a variety of methods, including “very weak signals of a connection” – for example, ads bought from U.S. accounts using U.S. IP addresses, but with the computer’s language set to Russian and a Cyrillic character set configured.
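Facebook has not published its method, but a toy version of the quoted signal might look like the following. The field names, values and scoring are entirely hypothetical:

```python
# Toy "weak signal" check: U.S. IP address but Russian-language settings.
# Field names and scoring are invented; Facebook's signals are not public.
def mismatch_score(account: dict) -> int:
    score = 0
    if (account.get("ip_country") == "US"
            and account.get("os_language", "").startswith("ru")):
        score += 1                  # U.S. IP but Russian-language system
    if account.get("keyboard_layout") == "cyrillic":
        score += 1                  # Cyrillic character input configured
    return score                    # higher = weaker claim of a U.S. buyer

buyer = {"ip_country": "US", "os_language": "ru_RU",
         "keyboard_layout": "cyrillic"}
print(mismatch_score(buyer))        # -> 2: flag for human review
```

As Facebook itself concedes, these are “very weak signals” – plenty of legitimate U.S. users have Russian-language systems.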

An allegation has been made that the purchase of these Facebook ads threw the election for Donald Trump.

  • If this is true, then $44,000 worth of Facebook advertising is the most powerful and economical form of persuasion in all of human history. You really can throw a national election for a few dollars per day in spending!
  • Think about that for a while. That implies Facebook is the grandest propaganda platform in world history and we have already lost everything.

Is it legal for foreigners to “speak out on global issues”? Yes.

Last night I watched a YouTube video from a young woman in Norway discussing her thoughts on the U.S. election and which U.S. candidate she supported, while noting that as a Norwegian she has no say in the U.S. election process. Similarly, I have seen social media posts from U.S. citizens commenting on or advocating for positions in other countries, including Israel, Pakistan, India, Mexico, Canada and the European Union.

The U.S. does have laws regarding foreigners actively participating in U.S. elections, but it has no laws prohibiting foreigners from publicly commenting on U.S. politics (nor could it).

Consequently, foreign actors can – legally – post items on social media that may be interpreted as influencing U.S. elections. No one has attempted to characterize the impact of such posts.

What Does It Mean?

I have written a number of posts about social media propaganda connected to Russia. Many of the published examples of Russia connected propaganda posts have similarities to the output of online, social media-based, for-profit “fake news” publishing businesses.

  • My view is that the Russia connected operations likely happened (but I have no way to know; I can only interpret the propaganda messaging directed at all of us to persuade us that this occurred).
  • A portion of the messaging was likely related to for-profit “fake news” publishing businesses creating emotionally laden click-bait links for ad revenue.
  • The evidence, including the conclusions of the U.S. DoJ indictment, is that the impact was minimal – or else you have to believe Facebook ad buys are many orders of magnitude more effective than any other media outlet. But then why would people still buy ads on TV, radio and newspapers?
  • Its impact must be viewed in the context of the massive amount of social media propaganda spread by organizations, individuals and U.S.-based fake news business operations. For example, I readily see social media propaganda messages shared and liked every day – yet Facebook itself says I never saw a single Russia connected ad or post.
  • The evidence, including the U.S. Department of Justice indictments, makes no claim that Russia connected propaganda changed the U.S. election outcome.
  • Just as Russia connected actors are believed to have used social media to try to persuade U.S. citizens on a variety of social issues, U.S. connected actors are attempting to persuade us that the election was manipulated by non-U.S. actors. In other words, a propaganda messaging battle is underway.

The main benefit of an investigation into the allegations that Russia connected actors threw the U.S. election is awareness of the power of social media platforms for the frictionless spread of propaganda messaging. Unfortunately, so far, little attention has been given to propaganda in the broader context – Russia connected operations were not the only ones and were likely a very small fraction of the overall propaganda effort on social media.

We are missing a huge opportunity to understand and address these issues – the consequences of unbridled social media propaganda operations coming from numerous parties inside the U.S. and around the world. We are missing it because of a politically driven focus on Russia that avoids the root issue: the frightening power of social media as a frictionless platform for the spread of propaganda.

The danger is not only Russia – or China – or U.S. based propagandists – the danger is the frictionless platform of social media for propaganda messaging.

Society now requires us to be liked on social media?

Not only is a “strong social media presence” now a prerequisite for many, if not most, jobs, but companies have begun to look at your number of followers as both a measure of monetary value and a career determiner. And according to TIME.com, employers actually consider people without Facebook suspicious. “If you boycott Instagram, you’re cutting yourself off from a lot of opportunities,” said Emily. “I started posting more selfies, despite being self conscious about it, because honestly—you get so many more ‘likes’ when you post a selfie. And you get so many more followers.”

Even Karimi admitted her social media absence is holding her back.

Source: “Why don’t I look like her?” How Instagram is ruining our self esteem

Many of us notice that popular social media personalities we are recommended to follow appear to be mostly young and attractive. I went looking for commentary on that subject and ran into the above item.

Using social media presence and likes in hiring would appear to be a new form of potentially illegal job discrimination, particularly when it becomes a de facto proxy for preferentially hiring young, healthy, good looking people.

U.S. government indicts Russian social media propagandists

Special Counsel Robert Mueller on Friday indicted 13 Russians for violating U.S. laws to interfere with the 2016 elections.

The indictment says the Russians acted in favor of Donald Trump and against Hillary Clinton — but also says the Trump campaign’s connection to them was “unwitting.”

They also acted against Trump rivals Ted Cruz and Marco Rubio and in favor of Clinton rival Bernie Sanders.

Source: 13 Russians charged with interfering in U.S. elections – MarketWatch

Assuming the allegations are true, this does not solve the propaganda problem on social media. Social media propaganda operations are conducted by individuals world-wide, including in the U.S., each with a variety of messaging goals. The alleged Russia-related activities are just one part of global propaganda operations conducted by many actors. If we hyper focus on one actor and ignore the others, we are not solving the social media propaganda problem.

Similarly, election officials are working to improve security of election related information systems. As the NY Times notes, “Experts have warned for years that state and local election equipment and security practices were dangerously out of date…” Until claims of Russia-related hacking came along, officials did not seem concerned about their security deficiencies – which says a lot about the competency of those running election systems.

The U.S. Election Assistance Commission issued a set of guidelines for election related technology systems. Sadly, their recommended requirements have been missing in many election related systems up until now.

How “Bot Armies” get Twitter hashtags trending

Of interest, a bot army is said to have “taken to Twitter” to influence social media posts. The bots generate enough tweets around a hashtag that it eventually gets shared and turns into an actual hashtag meme passed along by real people. In this way, propaganda bots can initiate and steer messaging on social media.

This is also known as “computational propaganda”. In the old days, propaganda usually required a printing press or a broadcast license. Social media made it possible for everyone to be a propagandist. Computational propaganda creates fully automated propaganda dissemination.
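A toy simulation of that dynamic – the parameters (bot post count, the chance a real user reshares, audience per share) are invented, and this illustrates only the amplification effect, not any real platform’s mechanics:

```python
import random

# Toy model: bots seed a hashtag; each post has a small chance of being
# reshared by a real user, and each reshare reaches more real users.
random.seed(42)

def simulate(bot_posts: int, p_share: float = 0.02, rounds: int = 5) -> int:
    exposure = bot_posts        # posts visible this round
    human_shares = 0
    for _ in range(rounds):
        new = sum(1 for _ in range(exposure) if random.random() < p_share)
        human_shares += new
        exposure = new * 20     # assume each share reaches ~20 followers
    return human_shares

for bots in (10, 100, 1000):
    print(f"{bots:>5} bot posts -> ~{simulate(bots)} organic shares")
```

Below a certain seeding level almost nothing happens; above it, real people carry the hashtag on their own – which is the whole point of the bot army.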

Source: Pro-Gun Russian Bots Flood Twitter After Parkland Shooting | WIRED


Sad and scary danger for those who live their lives online: crazed fans

An obsessed viewer of a YouTube channel sought to kill a YouTube star. One YouTube star has already been killed by a sick fanatic.

Source: They shared their lives on YouTube. Then an obsessed fan came calling – with a gun – SFGate

A surprising number of people – almost always young couples living exciting or unusual lives – are making a living posting videos of their lifestyle online. Sadly, this activity has put some of them at risk of harm from sick fans.

This post is not about social media propaganda – but about one of the unfortunate side effects of some aspects of social media.

Social media viral meme: 18 school shootings since January 1?

According to the Washington Post, the number is fake. It was spread on social media as propaganda from a non-profit organization.

Source: No, there haven’t been 18 school shootings in 2018. That number is flat wrong. – The Washington Post

This post is not about pro-gun or anti-gun issues but about the use of social media for propaganda efforts.

A number originating from a non-profit sounds legitimate – but it’s actually an “Appeal to Authority” form of argument combined with lying.

Within a short time, this fake claim spread rapidly on social media and became a “fact” that professional news reporters picked up and reported. Once a fake “fact” is published by the professional news services, others will use that as verification that the claim is true.

From a propaganda effectiveness perspective, this gets an A grade.

It is perplexing why the group chose to exaggerate its count, as there are enough actual shootings to make its point without resorting to being misleading. One would think being misleading would lead to subsequent distrust.

However, remember that in propaganda messaging, the first message people hear is the one that “sticks” – even if subsequently shown to be untrue or misleading. This is why this technique – exaggeration or misleading information – is very effective as a form of propaganda.

When combined with social media sharing, false claims can be widely distributed to the point they turn into “facts” that stick in the mind of the target.

Update: judging from the comments on the WaPo article, many suggest it’s okay to be misleading if it leads to someone’s desired conclusion. Or something.

Advertisers seek social media platforms that promote positive impact

As this blog has noted, much of social media has devolved into a culture of perpetual outrage by angry people often consumed with hate. This sort of social media is not fun to be around. Advertisers are noticing this too:

“Unilever will not invest in platforms or environments that do not protect our children or which create division in society, and promote anger or hate,” Unilever Chief Marketing Officer Keith Weed is expected to say Monday during the Interactive Advertising Bureau’s annual leadership meeting in Palm Desert, Calif.

“We will prioritize investing only in responsible platforms that are committed to creating a positive impact in society,” he will say, according to prepared remarks.

Source: Unilever Threatens to Reduce Ad Spending on Tech Platforms That Don’t Combat Divisive Content – WSJ