This is part two of our three-part series on bad ad networks. In our prior post we argued that malicious ad networks are a bigger problem than malicious publishers. In this post we will describe some simple checks anybody can run to detect malicious ad networks. Our next and final post in the series reviews the effects of turning off fraudulent networks. Please visit http://molocoads.com/ for more information.

Check Your Ad Networks Today

You are stuck running ad campaigns in a world with many bad ad networks. What should you do? How can you distinguish the good players from the bad players?

We recommend three simple checks. With these checks, you can detect most bad ad networks. Best of all, you already have all the data you need from your Mobile Measurement Partner, or MMP (e.g., Tune, Adjust, Kochava, AppsFlyer); most MMPs already provide this information, even if it’s not readily highlighted.

Network Check 1: Monitor the Click-to-Install Ratio

One of the most common fraud tactics today is click spamming. It is important enough that we will dedicate a whole separate post to it in the future. The key thing to understand for now is that click spamming involves an ad network reporting fake clicks to your MMP. As a result, bad networks tend to drive a much larger volume of clicks relative to installs.

Therefore, to combat click spamming, you need to examine the click-to-install ratio broken down on a per ad network basis. One should expect the click-to-install rate to be fairly consistent across ad networks for any given advertiser. When we ran this analysis for many of our advertising clients, however, we observed that every advertiser saw their click-to-install ratio vary significantly across networks:

Fig 1: Click-to-install ratio by ad network for a specific campaign. The degree varies, but we found that all ad networks except numbers 18 and 19 were engaged in click spamming.

The results portrayed in Figure 1 are shocking. Across ad networks, the click-to-install ratio varies from 9,015:1 down to 8:1. In other words, the worst network reported roughly 1,127 times as many clicks per install as the network with the lowest click-to-install rate.

Unless this ad network has an unorthodox targeting option for “people who click but seldom install,” this stat does not make any sense at all. When we performed deeper log analysis, it turned out that all the ad networks, except numbers 18 and 19, reported fake clicks. It seems that click spamming is actually a standard industry practice.

What should you expect when you perform this analysis for yourself? From our research, we find that there is no single “correct” value; it can fall anywhere between 5:1 and 100:1 depending on the nature of your app. What you should check is that the number is consistent across ad networks; if you see a high degree of variation, examine each network more closely. If you have a lot of data, you may be able to perform a more granular analysis by further breaking this down by publisher or format, to get a more apples-to-apples comparison across networks.
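If your MMP offers raw or aggregated exports, this check takes only a few lines of code. Below is a minimal sketch in Python, assuming a hypothetical CSV export named mmp_export.csv with network, clicks, and installs columns; your MMP’s actual field names will differ.

```python
# Minimal sketch: click-to-install ratio per ad network.
# Assumes a hypothetical CSV export with columns "network", "clicks", "installs".
import csv
from collections import defaultdict

clicks = defaultdict(int)
installs = defaultdict(int)

with open("mmp_export.csv", newline="") as f:  # hypothetical file name
    for row in csv.DictReader(f):
        clicks[row["network"]] += int(row["clicks"])
        installs[row["network"]] += int(row["installs"])

for network in sorted(clicks):
    ratio = clicks[network] / max(installs[network], 1)
    # 100:1 is only an illustrative cutoff; the real signal is a network
    # sitting far away from its peers on the same campaign.
    flag = "  <-- investigate" if ratio > 100 else ""
    print(f"{network}: {ratio:,.1f} clicks per install{flag}")
```

Again, the cutoff in the sketch is illustrative; what matters is consistency across networks, not any absolute number.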

Network Check 2: Identify Time-to-Install (TTI) Greater Than One Hour

TTI (Time-To-Install), sometimes called CTIT (Click-To-Install-Time), is a key metric for detecting ad fraud. Using it hinges on figuring out what a normal time-to-install looks like.

Our studies show this number should be short: typically within ten minutes. The typical behavior is that a user clicks an ad, downloads the app, and launches it shortly thereafter. Of course, some late installs are legitimate; sometimes we forget we downloaded an app and launch it only after a few days. This registers as a TTI of days, since the install is recorded at the time of first launch. Still, according to AppsFlyer, the rule of thumb is that fewer than 25% of installs should happen more than one hour after the click. It is a red flag if the value is higher than this.
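As a concrete illustration, here is a minimal Python sketch of this check, assuming a hypothetical install log with per-install click and install timestamps in ISO-8601 format; adapt the column names to whatever your MMP actually exports.

```python
# Minimal sketch: share of installs per network with TTI over one hour.
# Assumes a hypothetical CSV with columns "network", "click_time", "install_time".
import csv
from collections import defaultdict
from datetime import datetime, timedelta

over_hour = defaultdict(int)
total = defaultdict(int)

with open("install_log.csv", newline="") as f:  # hypothetical file name
    for row in csv.DictReader(f):
        tti = (datetime.fromisoformat(row["install_time"])
               - datetime.fromisoformat(row["click_time"]))
        total[row["network"]] += 1
        if tti > timedelta(hours=1):
            over_hour[row["network"]] += 1

for network in sorted(total):
    pct = 100 * over_hour[network] / total[network]
    # Rule of thumb per AppsFlyer: fewer than 25% of installs past one hour.
    flag = "  <-- red flag" if pct > 25 else ""
    print(f"{network}: {pct:.1f}% of installs with TTI > 1 hour{flag}")
```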

Fig 2: Install breakdown by ad network, shaded by TTI less than one hour (green) or greater than one hour (red). The rule of thumb is that at least 75% of installs should happen within one hour of the click. Only 5 out of 19 networks met this expectation.

Revisiting the same campaign from above, Figure 2 shows the percentage of installs whose TTI exceeded an hour. Only five ad networks (26%) met the rule of thumb. For all the other networks, the majority of installs had a TTI over one hour, a strong signal of fraud: their reported clicks bore very weak correlation to the resulting installs.

If you’re skeptical, we can provide further evidence. Figure 3 shows 24 hours’ worth of data for one offending network. At 9 AM every day, the network sends a flood of fake clicks to the MMP for two straight hours. For the rest of the day, installs accumulate steadily. These users never actually clicked, so we can assume they simply installed the app organically. Using this method, the offending network surreptitiously steals credit for organic installs.

Fig 3: This bad ad network sends out fake clicks for two hours every morning and reaps the benefit of poached organic installs throughout the day.

The important trick the network used here is to omit the device ID (IDFA or Google Advertising ID) from its fake clicks, because including a real ID would actually lower its chance of receiving credit. By excluding the ID, the network forces the MMP to fall back to fingerprint attribution, which increases the likelihood of credit being assigned where none is due. Fingerprinting was a great advancement in mobile attribution, but bad networks are exploiting it as a loophole. We therefore recommend a very tight lookback window for fingerprint attribution (generally one hour).
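One way to catch the Figure 3 pattern programmatically is to correlate each network’s hourly click volume with its hourly install volume: for a legitimate network, installs should roughly track clicks. The sketch below assumes hypothetical click_log.csv and install_log.csv files with a network column and a timestamp column each, and the 0.2 cutoff is purely illustrative.

```python
# Minimal sketch: flag networks whose hourly installs do not track hourly clicks.
# Assumes hypothetical logs, each with a "network" column plus a timestamp column.
import pandas as pd

clicks = pd.read_csv("click_log.csv", parse_dates=["click_time"])        # hypothetical
installs = pd.read_csv("install_log.csv", parse_dates=["install_time"])  # hypothetical

for network, c in clicks.groupby("network"):
    i = installs[installs["network"] == network]
    hourly_clicks = c.set_index("click_time").resample("1h").size()
    hourly_installs = i.set_index("install_time").resample("1h").size()
    aligned = pd.concat([hourly_clicks, hourly_installs], axis=1).fillna(0)
    corr = aligned.iloc[:, 0].corr(aligned.iloc[:, 1])
    if corr < 0.2:  # illustrative threshold; tune against your own data
        print(f"{network}: hourly click/install correlation {corr:.2f}  <-- investigate")
```

A burst of clicks at 9 AM followed by a steady trickle of installs all day, as in Figure 3, will show up here as a near-zero correlation.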

Network Check 3: Utilize the App Store Download Time

Before November of 2017, click injection was a major issue throughout the Android ecosystem. To combat the problem, Google released its Play Install Referrer API, which very helpfully provides a download timestamp that can be used to identify many kinds of fraud. This timestamp should fall neatly between the click timestamp and the install timestamp (which, as we mentioned above, is actually recorded at the time of first app launch). Armed with this additional data point, we were able to cleanly identify and bucket fraud for one client:

Fig 4: By comparing click, download, and install timestamps we could clearly detect that 30% of this advertiser’s traffic was fraudulent.

Here is how we used the download timestamp to identify these three buckets of fraud (a code sketch of this logic follows the list):

  • 3.5% of reported “installs” never had an accompanying download timestamp and therefore should never have been attributed at all.

  • 3% we classified as click injection because the click occurred right after the download began, indicating a malicious SDK detected the download and rushed to fire a click to steal attribution.

  • 20.5% had the download occur a long time (>10 minutes) before the click, indicating the network was taking credit for a very old install.
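Here is that bucketing logic as a minimal standalone Python function. The 10-minute threshold mirrors the description above, but treating any click-after-download as injection is an assumption; tune the cutoffs against your own data.

```python
# Minimal sketch of the three fraud buckets described above. Assumes per-install
# click, download, and install timestamps as datetime objects; download_ts is
# None when the Play Install Referrer download timestamp is missing.
from datetime import datetime, timedelta

def classify_install(click_ts, download_ts, install_ts):
    """Bucket one attributed install; a clean flow is click <= download <= install."""
    if download_ts is None:
        return "no download timestamp: should not be attributable"
    if install_ts < download_ts:
        return "inconsistent: install (first launch) precedes download"
    if download_ts < click_ts:
        # The download began before the click that claims credit for it.
        if click_ts - download_ts > timedelta(minutes=10):
            return "credit claimed for an old install"
        return "click injection"  # click fired right after the download started
    return "plausible: click -> download -> install"

# Example: a click reported ten seconds after the download began.
print(classify_install(
    click_ts=datetime(2018, 5, 1, 9, 5, 10),
    download_ts=datetime(2018, 5, 1, 9, 5, 0),
    install_ts=datetime(2018, 5, 1, 9, 8, 0),
))  # -> "click injection"
```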

The download timestamp passed by the Play Install Referrer API turns out to be incredibly useful for catching many kinds of fraud. This is a relatively new development, and you may not be able to access this data if you are using an older version of your MMP’s SDK. We therefore urge you to update to the newest version immediately; without it, you will not be able to identify these kinds of fraud.

Note that iOS handles this problem a little differently than Android. Instead of a download timestamp, it provides an install receipt that can be used to verify the download. Some MMPs already incorporate this into their attribution; you should check with your MMP to confirm they are considering this signal.

Conclusion: Fighting Ad Fraud is Possible Today

These three checks are all easy to complete and use the tools you likely already have at your disposal. However, these steps are not exhaustive. Also, as we shall see in our next post, fraud tactics are always evolving, so we expect that new techniques will be required in the future.

For these reasons we encourage you to share any tips and tricks you’ve developed with us. It will require a concerted effort among many parties to clean up the mobile ad ecosystem.


This is part two of our three-part series on bad ad networks. In our next and final post we will demonstrate the effect of turning off malicious ad networks on organic installs. For more background, please see our prior post on why malicious ad networks are a bigger problem than malicious publishers. Visit http://molocoads.com/ for more information.