Automated Activity Versus Authentic Activity – Why You Should Care

For years we have skirted around calling bots, bots. Industry standards-setting bodies like the MRC ("Media Rating Council") call bots IVT ("invalid traffic"). There are even two kinds of IVT – General IVT (bots) and Sophisticated IVT (also bots). But that just serves to confuse, not clarify. So let me clarify.

I’ll use a recent example to illustrate – the Twitter bot account problem, which lots of folks have heard about by now because Elon Musk asked about it publicly. Twitter claims spam accounts are only 5% of all accounts on Twitter; Musk, and everyone else, speculates it’s way higher than that. Many outside sources and services that monitor fake accounts have reported numbers far higher than 5%. In fact, some of these outside services show Musk himself has 70% fake followers, which amounts to tens of millions of Twitter accounts. But because none of these outside companies knows the true total number of Twitter accounts, they don’t have a reliable denominator to compute the percentage of accounts that are fake.
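To make that denominator problem concrete, here’s a tiny back-of-the-envelope sketch in Python. The account counts are made-up assumptions for illustration, not Twitter’s real figures; the point is only that the same observed number of fake accounts implies very different percentages depending on what you assume the total to be.

```python
# Illustrative only: all numbers below are hypothetical assumptions,
# not Twitter's real figures. An outside service can observe a count
# of fake accounts (the numerator), but without the platform's true
# total account count (the denominator), the percentage is a guess.

observed_fake_accounts = 50_000_000  # hypothetical count from an outside service

for assumed_total in (250_000_000, 400_000_000, 1_000_000_000):
    rate = observed_fake_accounts / assumed_total
    print(f"assumed total accounts: {assumed_total:>13,} -> implied fake rate: {rate:.1%}")

# Same numerator, wildly different percentages -- only the platform
# itself knows which denominator is the right one.
```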

[Chart: fake accounts removed by Facebook each quarter, with annual totals, from Facebook’s transparency center]

To get a sense of how large a problem fake accounts are, we can look at the numbers Facebook self-reports in its transparency center. The figures in the chart above are the numbers of fake accounts removed per quarter, with the annual totals in red. In 2021, for example, Facebook removed 6.5 BILLION fake accounts from its platform, usually more than 1.5 billion per quarter. That’s a LOT of accounts, considering how strict Facebook is about new account creation. They have made it truly hard, requiring a new, unique phone number for each new account. But despite these strict protocols, bad guys are still able to create enough new accounts to stay ahead.

Other social networks have lax or non-existent deterrents to fake account creation. Hackers can even automate the large-scale creation of new accounts, e.g. on Twitter. Independent researchers like conspirator0 have documented large quantities of accounts created within short periods of time (thousands of new accounts in a day), along with detailed data on the automated activity those accounts generate. These are real accounts, but the activity is automated (i.e. done by scripts and algorithms, not by humans). Computer scripts can generate enormous amounts of activity, such as likes and retweets. A human cannot physically “like” 1.2 million tweets and retweet 600,000 tweets in a short period of time.
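A rough sanity check makes the physical impossibility obvious. The seconds-per-action figure below is my own generous assumption (tap, scroll, tap again), not a measured value:

```python
# Back-of-the-envelope check: how long would 1.2 million likes and
# 600,000 retweets take one human, acting nonstop?
# The per-action timing is an assumption, not measured data.

likes, retweets = 1_200_000, 600_000
seconds_per_action = 3  # assumed: very fast, no breaks, no sleep

total_hours = (likes + retweets) * seconds_per_action / 3600
print(f"{total_hours:,.0f} hours of nonstop activity")    # ~1,500 hours
print(f"{total_hours / 24:,.0f} days without sleeping")   # ~62 days
```

A script, of course, does this effortlessly and in parallel across thousands of accounts.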

Here’s a long thread showing automated accounts are counted in mDAU (monetizable Daily Active Users) on Twitter: https://threadreaderapp.com/thread/1528849855363305473.html

Authentic human activity online is finite and small; the amount of time humans spend on social networks is finite and small too. How often do you go on social networks such as Twitter, Instagram, and LinkedIn, and how long do you continuously use them? Let’s say you spend 5 hours a day on each – reasonable? Even with that amount of human usage, absurd numbers like the following still can’t be explained: 1.7 billion views on DSW’s hashtag challenge in a matter of days, or 8 billion views for haircare brand Paul Mitchell.
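Here’s why a number like 1.7 billion views in a few days strains credulity. The audience size and time window below are my own order-of-magnitude assumptions, used purely for illustration:

```python
# Rough sanity check on 1.7 billion hashtag-challenge views "in days".
# The US audience size and the number of days are assumptions for
# illustration, not reported figures.

total_views = 1_700_000_000
assumed_us_audience = 100_000_000   # assumption: order-of-magnitude user base
assumed_days = 3                    # assumption: "in days"

views_per_user = total_views / assumed_us_audience
print(f"{views_per_user:.0f} views per user over {assumed_days} days")
print(f"{views_per_user / assumed_days:.1f} views per user per day")

# Every single user would have to watch this one campaign roughly 17
# times in three days -- on top of everything else they watch.
```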


While it is not realistic for humans to watch TikTok videos over and over until their eyes fall out, it is easy for bots and automated activity to generate truly huge numbers. Understanding any number you see in digital requires you to understand the difference between automated activity and authentic activity (i.e. activity done manually by humans). Common sense should help you. Your own usage experience can also be your guide.

I understand that Candy Crush was hugely popular, and I still see some folks playing it on the subway. But it’s not reasonable to believe that tens of thousands of games have the same levels of human usage, especially games that no one has ever heard of. How do any humans download and play mobile games they’ve never heard of? How do those games generate so many billions of ad impressions to put them into the top 10 grossing apps (by ad revenue)? How? It’s not authentic human usage. It’s automated activity from mobile emulators and malware, which can install apps and remotely control and play them 24/7 to maximize the number of ad impressions that can be created and sold.
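A quick illustration of why automation, not human play, explains the volume. The ad interval and farm size below are assumptions I picked for illustration; the arithmetic is the point:

```python
# Why emulator farms explain the impression counts: a device or emulator
# instance that "plays" a game 24/7 sees ads far more often than any human.
# The ad interval and number of instances are assumed for illustration.

seconds_per_day = 24 * 60 * 60
ad_interval_seconds = 30            # assumption: one ad every 30 seconds
emulator_instances = 10_000         # assumption: a modest emulator farm

impressions_per_device = seconds_per_day // ad_interval_seconds   # 2,880 per day
farm_impressions_per_day = impressions_per_device * emulator_instances

print(f"{impressions_per_device:,} impressions per device per day")
print(f"{farm_impressions_per_day:,} impressions per day from the farm")   # 28,800,000
print(f"{farm_impressions_per_day * 30:,} impressions per month")          # ~864 million
```

No human player, and no obscure game’s real audience, comes anywhere close to that.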

To summarize, “bots” is a term that applied well to fake visitors to websites. It is not as applicable to social media, especially mobile in-app usage. Also, the accounts on Twitter are not fake accounts; they are real accounts, but the activity from those accounts is automated. So the terms “automated activity” versus “authentic activity” might be easier to understand. On Twitter, a relatively small percentage of accounts can generate the vast majority of activity, because they are automated by computer scripts. That also means the vast majority of ads in the Twitter stream are shown during this kind of automated activity, not when humans are active on Twitter.


Most other forms of ad fraud are generated by automated activity, not authentic human activity or usage. The “loudest alarm clock” app sells more ad inventory than ESPN, and nearly as much as Spotify and Hulu, on a monthly basis. But no one has ever heard of that app, or needs to use it, because phones already come with built-in clock apps. There’s something strange about this alarm clock app’s numbers, no? Activity and usage automated by computer scripts can easily explain this; humans’ authentic usage and activity cannot.

In CTV (“connected TV”) ad fraud, humans’ activity watching streaming content cannot explain the tens of billions of CTV ads. But automated activity, where Python scripts rotate “28.8 million US household IP addresses, 3,600 streaming apps, and 3,400 CTV device models,” can (see: Oracle discovers largest ever CTV fraud scheme, January 2021).
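The scale of that rotation is the key point. Multiplying the pools quoted from the Oracle report shows how many distinct-looking “viewers” such a scheme can impersonate; the pool sizes come from the report, the arithmetic is mine:

```python
# Combining the pools cited in the Oracle report: rotating household IPs,
# app identifiers, and device models lets each spoofed ad request look
# like a different household watching a different app on a different TV.

household_ips = 28_800_000
streaming_apps = 3_600
ctv_device_models = 3_400

distinct_combinations = household_ips * streaming_apps * ctv_device_models
print(f"{distinct_combinations:,} distinct-looking IP/app/device combinations")
# ~3.5e14 -- effectively no two fake "views" ever need to look alike.
```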

Hopefully, the distinction between automated activity versus authentic activity is useful to you in thinking about ad fraud and the many, many ways bad guys can commit it and scale it, far beyond humans’ actual usage and activity.

Dr. Augustine Fou has been on the front lines of digital marketing for 25 years. It is from that vantage point that he studied and documented the nexus of cybercrime and ad fraud. As an investigator, Dr. Fou assists government and regulatory bodies; as a consultant, he helps clients strengthen cybersecurity and mitigate threats and risks, including the flow of ad dollars that fund further criminal activity. Be sure to check out his many articles on his LinkedIn profile page or at FouAnalytics. This article originally appeared on Dr. Fou’s LinkedIn profile and is reprinted here with the author’s permission.

