Long Read: Astroturfing, Twitterbots, Amplification – Inside The Online Influence Industry
By Media Partner – The Bureau of Investigative Journalism

Heather Davis, born April 1991, lives in Jamestown, New York. She joined Twitter in June 2011 but doesn’t use it much. In over six years she’s tweeted just 331 times, the last time over a year ago when she wrote: “Heading to bed can’t wait to watch #ClashofChampions tomorrow night”. Her bio mentions her kids – a couple of them peek out of the banner photo at the top of her page – and offers a short vignette of her personality: “I speak the truth I’m a loyal friend I don’t like drama I’m honest.” Her Twitter handle is @TwIzTeD_bItCh.
Heather, aka @TwIzTeD__bItCh – call her Heather 2 – joined Twitter in April 2014. She’s more of a fan of the site than Heather 1, with quadruple the number of tweets in half the time. Also from Jamestown, her profile image shows the same woman, with a similar banner picture and similar bio – “I speak the truth whether you want to hear it or not”. Her last tweet, just before we finished this article, was a retweet of an article about “Healthy Living for your Brain and Body”.
The two Heathers exist in parallel, with only an extra underscore in the Twitter user name to separate them. One of them is a real person. The other is fake.
Heather 2 isn’t just into healthy living. Her interests – indicated by her retweets – span My Little Pony, Republican congressional candidate Omar Navarro, a new novel billing itself as “Casablanca in Washington DC”, “Things to do in Jaipur”, a guitar and piano instruction book, North Korea and bikini-clad shots of Philippine martial arts “ring girl” and model Red Dela Cruz.
Back in July Heather 2 also took a passing interest in South African politics. Retweeting a comment from the pressure group Black Land First, she wrote: “The attacks are synchronized and well coordinated by agents of white monopoly capital. #HandsoffPublicProtector”.
The Bureau encountered the two Heathers during an investigation into social media manipulation – a phenomenon which is simultaneously ubiquitous and little understood. Two months after Heather 2’s sudden interest in South African “white monopoly capital” a British PR firm, Bell Pottinger, spectacularly imploded. The growing scandal is now threatening to engulf international auditing firm KPMG. In a small way, Heather 2 was responsible.
The phrase tweeted by Heather 2 – “white monopoly capital” – started showing up in South African social media back in October 2016, in the wake of a report by a government watchdog into allegations of corruption between President Zuma and the Guptas, a multi-millionaire family with mining, media and IT businesses (allegations denied by both).
The phrase was thrown at those attacking the Gupta family, the implication being that criticism of the Zuma-Gupta relationship was simply a rearguard action by the old white elite whose vested interests were threatened in the new landscape of post-Apartheid South Africa. Media investigations soon found that coordinated clusters of Twitter accounts were repeating the phrase in a manner giving the impression of a spontaneous grassroots reaction.
Bell Pottinger – a controversial PR firm which a Bureau investigation previously showed to have spread fake news during the Iraq war – had been working for the Gupta family since January 2016. A trove of leaked emails revealed that the firm had planned to deflect negative attention from the Guptas via a campaign highlighting the message of “economic emancipation”. A growing swell of public opinion in South Africa identified the firm as bearing some responsibility for the racially divisive white monopoly capital tweets. Bell Pottinger denied this, while eventually admitting that its actions on behalf of the Guptas had been “inappropriate and offensive”. Following a damning internal report, the company was expelled from the PR industry body and went into administration.
But to this day, the workings of the white monopoly capital campaign remain murky. Despite the leaked emails, months of investigative journalism and industry inquiries, no direct link has been confirmed between Bell Pottinger and the army of Twitter profiles which promoted the white monopoly capital message, and the social media campaign has continued after Bell Pottinger’s collapse. The truth is that no one quite knows how it came about.
Fake grassroots activity on social media – also known as “astroturfing” – has become a fact of political life around the globe. In the US, special investigator Robert Mueller is currently probing the role of Russia in a series of Facebook pages that supported Donald Trump in last year’s election. In the UK, fake Tinder accounts promoted Jeremy Corbyn at the last election and automated Twitter accounts promoted both sides of the Brexit referendum the year before. Academics and thinktanks in the US, UK and EU have pored over social media feeds and mapped networks designed to spread partisan or inaccurate news.
While there is much speculation over the forces presumed to be behind such campaigns, less is known about the mechanisms through which they operate and the network of companies that facilitate them.
The Bureau set out to untangle some of the threads in the Gupta case, and found itself deep in the bizarre, globe-spanning and secretive world of the online influence industry.
It’s an industry which, at present, has more or less free rein in public discourse, and it operates at a scale that is hard to imagine. “We detect and block approximately 450,000 suspicious logins every day that we believe to be generated through automation,” Twitter’s top lawyer Sean Edgett told the US Senate Committee on the Judiciary at the end of October. But metrics for how many logins Twitter’s systems fail to block are harder to come by.
“Oftentimes politically powerful individuals and groups make use of social media bots because they provide an additional layer of disguise,” said Samuel Woolley, research director of the Digital Intelligence Lab at Institute for the Future. “A political group will hire a contractor to deploy an army of human seeming accounts that are actually run by automated software, or bots. The goal is for these bots to amplify a particular idea, or even a political candidate – to give it the illusion of popularity.”
The issue is starting to attract the attention of lawmakers, particularly in relation to social media operations by state actors. “You have a huge problem on your hands,” Senator Dianne Feinstein told general counsels for Twitter and other tech companies at the start of November. “This is a very big deal.”
The Bureau’s investigation began with a dataset of tweets promoting the “white monopoly capital” message. Between 12 July and 22 August we archived almost 18,000 tweets containing this phrase.
Our dataset threw up some immediate patterns. There was a core of ten accounts which pushed the message strongly and consistently, tweeting the phrase over a hundred times in the 40-day period. There were just under 200 accounts with a lesser but still consistent interest, using the phrase between ten and sixty times over the same period. And there were some 6,000 accounts which used the phrase only once.
South African cyber sleuths focused their attention on the most active accounts. Digital investigator Jean Le Roux identified hundreds of accounts posing as South African citizens, often tweeting 30 times a day, several of which were seemingly run from somewhere in India. These accounts promoted a group of websites with names like wmcleaks.com (using the acronym for white monopoly capital), wmcscams.com, whitemonopolyafrica.com and whitemonopoly.com. An investigation by News24 showed that these websites were linked to an Indian consultancy firm, CNET Infosystems. But after this the trail went cold.
The Bureau decided to focus on the other way in which the message was amplified: by hundreds of accounts tweeting it only once, rather than dozens tweeting it hundreds of times. Our dataset showed that, of these one-off accounts, a suspiciously high number were created on certain days. A popular moment was April 2014: 46 of them were created on 15 April 2014, 45 on 11 April, 41 on 13 April, 32 on 12 April and 29 on 14 April. Another 200-odd dated from June 2012.
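The Bureau does not describe its tooling, but the pattern it relied on here – thousands of single-use accounts clustered on a handful of creation dates – is simple to check for. Below is a minimal sketch in Python, assuming the archived tweets have been reduced to a hypothetical list of (screen name, account creation date) pairs; the data shown is illustrative, not drawn from the actual dataset.

```python
from collections import Counter
from datetime import date

# Hypothetical input: one (screen_name, account_created_at) pair per account
# that tweeted the phrase exactly once. Illustrative placeholders only.
one_off_accounts = [
    ("example_account_1", date(2014, 4, 15)),
    ("example_account_2", date(2014, 4, 11)),
    ("example_account_3", date(2012, 6, 3)),
    # ... thousands more rows in a real dataset
]

# Count how many accounts share each creation date. Sharp spikes on single
# days are one signal that the accounts were registered in bulk.
creation_counts = Counter(created for _, created in one_off_accounts)

for created, n in creation_counts.most_common(10):
    print(f"{created}: {n} accounts created")
```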
While many of the most active accounts appear to have had human beings behind them – albeit human beings pretending to be someone else – the Bureau’s data suggested the more infrequent tweeters were automated accounts, or bots.
“All bots for sure,” said Jim Vidmar, a social media marketing expert based in Las Vegas, looking over a sample of the accounts that the Bureau provided. “All April 2014 and they have the tell-tale signs.”
Vidmar knows because he’s in the business himself. “Usually when I buy accounts like that … that is what they look like,” he explained, pointing to their shared creation date, the random nature of their interactions and the low ratio of followers to number of accounts followed. “Real accounts don’t follow 1500 random people and have such low follow back percentage.”
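Vidmar’s “tell-tale signs” – a shared bulk creation date, a large number of accounts followed and a very low follow-back ratio – translate naturally into a rule of thumb. The sketch below is our own illustration of that heuristic, not an industry tool; the `Account` fields and the thresholds are assumptions chosen to echo the figures he mentions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Account:
    screen_name: str
    created_at: date
    followers_count: int
    following_count: int

# Dates on which an implausible number of the surveyed accounts were created
# (illustrative values echoing the April 2014 cluster described above).
SUSPICIOUS_CREATION_DATES = {date(2014, 4, day) for day in range(11, 16)}

def looks_bought(account: Account) -> bool:
    """Rough heuristic in the spirit of the 'tell-tale signs' quoted above:
    bulk creation date, follows many accounts, very few follow back."""
    bulk_created = account.created_at in SUSPICIOUS_CREATION_DATES
    follows_many = account.following_count >= 1000
    follow_back = account.followers_count / max(account.following_count, 1)
    return bulk_created and follows_many and follow_back < 0.1
```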
Vidmar referred to a “secret world” of websites selling retweets, followers and other such services. The Bureau looked at several dozen such sites, competing to make their clients appear “important”, “more legit” or “loved by all”. One company offers 100 retweets for $1 and 5,000 for $27. Another promises a hefty 500 retweets for $1 and an astounding 20,000 for $25. For the more long-term strategists out there, there are monthly plans: one service provider offers a package of 150 retweets and 95 likes on each of any number of tweets, along with random comments and mentions and a certain number of followers, all for $299 a month.
So where do these attentive followers, likers and retweeters come from? And how are the low prices they command economically viable?
“The actual truth is that these followers are not real,” Elia Miller, who runs buytwitterfollowersfast.com, told the Bureau. “They are just real looking.”
In a surreal twist, a clue about which firm might have been paid to promote the white monopoly capital message came to us via an organisation preparing for the robot apocalypse.
In the 1990s Eric Klien, a former stock market analyst, established The Atlantis Project. Its aim was to create a “floating city in the Caribbean Sea named Oceania – a city independent of the limitations and bureaucratic failures of present day government.”
The Atlantis Project was short-lived but Klien modified his vision and in 2002 set up the Lifeboat Foundation, “dedicated to encouraging scientific advancements while helping humanity survive existential risks”. The risks stem from the increasing sophistication of genetic engineering, nanotechnology and artificial intelligence, which Klien and others believe will lead to the “Singularity” – the point at which machines will create their own superintelligence, surpassing that of humans and bringing about the end of the human era.
The Lifeboat Foundation has a blog and a Twitter account, recently posting content related to Elon Musk, cryogenics, nanoparticles and existential hope. This account itself is retweeted regularly by other accounts. One of them was @Jake0Knudsen.
@Jake0Knudsen retweeted the Lifeboat Foundation’s content on 16 July: “AI Creates Fake Obama: Artificial intelligence software could generate highly realistic fake videos of former president”. Around the same time he got interested in South African politics, retweeting Black Land First’s leader: “Mbeki says there is no white monopoly capital. At the rate the agents of WMC are going to deny apartheid existed.”
Also around this time, @Jake0Knudsen was retweeting content about advice for writers (“Author coaching or manuscript editing?”), gun control in Washington state, Trump-Russia collusion, dominatrix pornography and clips from Balkan news site Kratke Vijesti. These are just a few of the topics he touched on.
All this activity would have come as a surprise to @Jake3Knudsen – whose banner photo, profile photo, location (“Shelton”) and bio (“Jessica Johnson is baee”) are all shared by @Jake0Knudsen. But @Jake3Knudsen – whose account was created in November 2013, five months before @Jake0Knudsen – hasn’t tweeted since January 2015.
In promoting the Lifeboat Foundation’s content, Klien did sometimes use a retweet service, he told the Bureau. He was noncommittal about its efficacy, saying that “it costs some bucks and is of marginal value so we would only do this as long as our endowment fund is flush.”
The service he used, he told the Bureau, was called Twitterboost, also trading as Devumi.
Devumi was established in 2011 in Florida. In the words of its lawyers, “it provides digital marketing services that guarantees to increase Twitter followers, Twitter retweets and likes, Youtube views, YouTube subscribers, Youtube likes and comments, Soundcloud plays, Soundcloud followers, Soundcloud likes and reposts, and comments, Vimeo plays, Vimeo followers, Pinterest followers, Pinterest likes and repins, Linkedin followers, Linkedin connections, and Linkedin endorsements.”
It seems pretty successful at this, claiming over 200,000 customers. The company’s CEO, German Calas, describes himself as “Serial Entrepreneur, Visionary, Philanthropist, Ninja”. “I’m one of the few bosses that pay my employee’s to tweet,” he wrote in 2012. “I think most of you would enjoy that. lol.”
Calas might not be laughing quite so loudly about his employees now, because some months ago one of them hacked his company email account and made off with his client list. According to Devumi’s complaint, filed in the Southern District of Florida in August, Calas engaged a contractor by the name of Ronwaldo Boado to provide “order management and support for Devumi and its sister companies, including but not limited to TwitterBoost.co”. Boado, as a contractor within Devumi’s “Order Success Team”, had access to “confidential information and trade secrets contained therein”.
According to Devumi’s court filing, Boado was fired for “inciting conflict” between other team members. He retaliated by taking control of the firm’s email account and using it to cancel previous orders and redirect them to a copycat company he had set up himself, combining two of his previous employer’s business names into the portmanteau “DevumiBoost” so that customers would think they were simply paying the same company.
Devumi, attempting to subpoena Boado, retained Crowe Foreign Services, a specialist legal process server. But Crowe’s operatives were unable to locate Boado: his address in the Philippines turned out not to exist.
The Bureau called a mobile number associated with Boado in the court documents, and spoke to a man who claimed to have no idea what we were talking about.
Like one of the ephemeral Twitter accounts that he managed, Boado had disappeared without trace.
The chief executive officer of Devumi has not responded to requests for an interview.
To confirm whether Twitterboost (aka Devumi) ran the bots participating in the white monopoly capital campaign, the Bureau purchased 5,000 retweets from the company for a specially set up Twitter account. By tracking which accounts repeated our content, we saw that two of them had also featured in our white monopoly capital dataset. The two common accounts were Pigs and Pints (@PlgsandPints) and LBDT (@VaieenCaarp).
Pigs and Pints appears to have been cloned from the account of a restaurant on Australia’s Gold Coast, with a letter L substituted for the letter I in the user name. @VaieenCaarp appears to be based on an account belonging to Valentin Mesa, known on Facebook as valen.carp.39. (CARP and LBDT – “Los Borrachos del Tablon” – refer to Argentinian football team Club Atlético River Plate.)
Both of these accounts are now suspended. We asked Devumi for a comment but none was forthcoming.
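The matching step described above boils down to a set intersection between two lists of handles: the accounts that retweeted the content we paid to promote, and the accounts in our white monopoly capital archive. A minimal sketch, assuming both lists are available as plain sets of screen names (the variable names and the third entry in each set are placeholders of ours):

```python
# Accounts that retweeted the content whose promotion we purchased
# (the two real matches named above, plus a hypothetical placeholder).
purchased_retweeters = {"PlgsandPints", "VaieenCaarp", "example_bot_1"}

# Accounts appearing in the archived "white monopoly capital" dataset.
wmc_accounts = {"PlgsandPints", "VaieenCaarp", "example_bot_2"}

# Accounts present in both sets link the retweet vendor to the campaign.
overlap = purchased_retweeters & wmc_accounts
print(sorted(overlap))  # ['PlgsandPints', 'VaieenCaarp']
```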
Firms like Devumi have to maintain large herds of accounts in order not to be too badly affected when parts of their roster fall foul of Twitter’s anti-bot algorithms – the recent fate of @VaieenCaarp and @PlgsandPints. As a result, there is a flourishing market for bulk accounts. Bio (and bio language), profile image, phone verification, number of tweets, account set-up date, regularity of tweets and number of followers are all factors in the cost of an account. Older accounts and those with a more detailed history of interactions are more expensive. Newer accounts – especially those lacking profile pictures, known as “eggs” in Twitter parlance – are cheap.
A glance at one of the foremost account marketplaces gives an idea of the variety of services available. Users of the Black Hat World social media forum (“If you gotta ask – you’re in the wrong place”) buy and sell email, Facebook, Twitter, Tumblr, Pinterest, Youtube, Instagram and many other accounts on a daily basis. Accounts can be “aged”, “phone verified”, “with Real followers” or tailored to other specifications.
A recent advert has Facebook accounts ranging from 50 cents (new, non-phone-verified) to $6 (one to two years old, phone verified) each; one to two-year-old phone verified Twitter accounts sell for $2 each with newer ones at 60 cents. Sellers also offer tools for bulk account creation, remote SMS verification, commenting and upvoting services, “viral content finders” and complete marketing packages like the “Twitter Money Bot” which scrapes tweets, bios and pictures then follows and unfollows, retweets, likes and generates replies.
Miller, the founder of buytwitterfollowersfast.com, was candid about how the industry ticks. “If there’s a seller out there who’s claiming that his followers are real, he’s lying. Think about it, you can’t expect to get 500 real Twitter followers for just $7, that’s just not doable.”
Some firms take a different line. A representative of Socialsbox – who did not give a name but agreed to a Skype chat – told the Bureau that the company was “using more legitimate way of increasing retweets then just freshly created accounts”. “All accounts belong to someone,” the representative said, explaining that a sort of exchange system operated whereby people followed and retweeted others “to earn credit which later on can be used to gain same service.”
To create a large number of bots, it helps to have a large number of email accounts. Jim Vidmar, the Vegas-based marketing expert, explained that this can be achieved through Russian mailservers like mail.ru or inbox.ru, which are “very liberal” – that is, they don’t mind how many email addresses you create. You can then use these addresses to verify your Twitter accounts.
Bulk Twitter accounts are plentiful, and users are picky. “Account quality is not great,” complained one Black Hat World user about an order he had placed. “These are obviously fake accounts … The issue was the username, very similar to @fh34odbh303n.”
Vidmar takes care to “brew”, as he puts it, the accounts which he buys. “I don’t want Russian eggs.” Brewing the accounts – setting up a profile picture, a bio, following some people and making a few tweets – can take “weeks and you need to be fluent in multiple bits of software”. Nonetheless, when operating on a bulk scale, “you can’t make bots totally different from each other. They have to have similarities.”
The similarities we observed in the “white monopoly capital” accounts we surveyed indicated that many of them were likely clones of pre-existing accounts belonging to real people. Rather than having garbled, machine generated names like “@fh34odbh303n”, they scraped – that is to say, copied – names, user names, bios and profile and background images from real users, simply changing a character in each user name to keep it unique. At first glance these altered characters are easy to miss: two underscores instead of one, l instead of i, a substitution of a different digit.
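One way to surface clones of this kind is to compare suspect handles against established accounts and flag pairs that differ only by a look-alike character or an extra underscore. The snippet below is a simplified illustration using Python’s standard difflib module, not a reconstruction of any investigator’s actual tooling; the normalisation rules and the 0.9 threshold are assumptions.

```python
import difflib

def normalise(name: str) -> str:
    """Collapse substitutions that are easy to miss at a glance:
    l for i, 0 for o, and extra underscores."""
    return name.lower().replace("l", "i").replace("0", "o").replace("_", "")

def likely_clone(candidate: str, original: str) -> bool:
    """Flag a handle that is nearly identical to an existing one."""
    if candidate == original:
        return False
    if normalise(candidate) == normalise(original):
        return True  # differs only by a look-alike character or underscore
    similarity = difflib.SequenceMatcher(None, candidate, original).ratio()
    return similarity > 0.9  # catches single-character edits on typical handles

print(likely_clone("TwIzTeD__bItCh", "TwIzTeD_bItCh"))  # True
print(likely_clone("Jake0Knudsen", "Jake3Knudsen"))     # True
print(likely_clone("PlgsandPints", "PigsandPints"))     # True
```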
Other researchers have come across similar uses of cloned Twitter accounts. A recent study by Kate Starbird of the University of Washington uncovered a “very sophisticated botnet” using cloned accounts to amplify a popular conspiracy theory website, therealstrategy.com. Starbird noted that the fake account network “seems to be effectively bringing ‘real’ accounts into its friend/following networks”.
Persuading real people to participate in your network and disseminate your narrative is the ultimate goal of all astroturfing operations. Le Roux, the South African investigator, was ambivalent about the success of the white monopoly capital campaign in this regard. To the extent that it tried to push a more positive portrayal of the Gupta empire, it “failed dismally”, he told the Bureau: once fake accounts were exposed, “anyone (even real persons) who tweeted in support of the Guptas would be labelled a ‘Guptabot’ and ridiculed”. On the other hand, and more seriously, the activities of the bot network “robbed South Africans of the opportunity to have a frank and untainted discourse about white monopoly capital and its effects”.
Examination of the bot network appearing in the white monopoly capital campaign offers an insight into how diverse and widespread the use of paid social media really is. Twitter accounts involved in it were also promoting Chinese news agency Xinhua, a Kuwaiti video channel, a British betting website, US-focused sports and sports-shoe fanpages and lists of daily shopping bargains, to name just a few. (Some account holders contacted by the Bureau denied paying for retweets and suggested that the bots had come across their content randomly or by following trends – “I don’t pay any so I guess I’m getting some free love,” said one.)
Transforming the raw material of automated followers and retweeters into a real commercial or political effect is a complicated alchemy, however, as the relative failure of the South African campaign shows. As one US-based digital strategist put it, “Anyone can set up a bot that can tweet ‘Donald Trump is great’ 10 times a day – to do it well is expensive”.
Jim Vidmar, though, is thoroughly convinced of the power of automated social media, if done properly. “I can crown people,” he told the Bureau. “I can decide who’s going to be famous and not famous for this hour. If a guy is 100 tweets down, I can put him up. Boom. That’s the dangerous part. I can decide who is good and who’s bad.”
The Bureau reached out to the two people mentioned in this article whose Twitter accounts had been cloned, but received no response.
By Croften Black and Abigail Fielding-Smith
Additional research by Alice Milliken.