This is part three of our three-part series on bad ad networks. In our prior posts we discussed the scope of ad network fraud and simple checks you can perform to combat the problem. In this post we look at what to do once you have detected bad networks, and at how quickly fraudsters adapt when confronted. Please visit http://molocoads.com/ for more information.

Suppose you have followed our checks and detected a few bad ad networks. What now? The most important thing is to stop using them immediately. Unfortunately, many marketers find it difficult to quit their ad partners because of two practical concerns.

Concern 1: Effect of Pausing Networks on Total Install Volume

You might be worried that cutting off bad ad networks will cause your installs to drop. This is totally understandable: as a marketer, you live with constant pressure to drive more installs. You may feel it is better to accept a certain amount of fraud, in the hope that the good will outweigh the bad, or in the belief that you can squeeze vendors for refunds.

We studied this topic extensively and found that you do not need to worry: shutting off bad networks will not hurt your install count. In fact, bad networks' fraudulent installs are mostly generated by poaching organic installs. When you stop using a bad ad network, you save a lot of money while your total install count hardly changes.

Fig 1: Overall effect of removing bad ad networks and reassigning their budget on install counts.

Figure 1 shows the before-and-after effect of removing bad ad networks and reassigning the leftover budget. The left set of bars, labelled “Current”, shows a real campaign receiving about a third of its install volume, or 30K daily installs, from a mix of ad networks.

The second set of bars, labelled “Fraud Removed”, shows what happened when the advertiser cut out the ad networks we flagged as potentially fraudulent. The red bar representing paid installs shrinks by 80%, producing an equivalent saving in ad budget. Note that the green bar representing organic installs grew to offset the drop in paid installs, so the yellow bar representing total installs barely budged.

The third set of bars shows how the advertiser can reassign the original budget to the remaining good networks and deliver a 25% increase in total installs on the exact same budget. This is a no-brainer deal!
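For readers who want to sanity-check this math on their own numbers, here is a minimal sketch (in Python) of the Figure 1 arithmetic. It is not our production model: the function, its poach_rate parameter, and all the install counts below are illustrative assumptions.

```python
# A minimal sketch of the Figure 1 arithmetic, not a production model.
# Assumption: a share ("poach_rate") of a flagged network's paid installs
# are poached organics that return as organic installs once the network
# is cut. All numbers are illustrative.

def simulate_network_removal(paid, organic, flagged_paid, poach_rate=0.9):
    """Estimate daily install counts after cutting flagged networks."""
    new_paid = paid - flagged_paid                     # flagged spend stops
    new_organic = organic + flagged_paid * poach_rate  # poached installs return
    return new_paid, new_organic, new_paid + new_organic

# Roughly the "Current" scenario: ~30K paid installs out of ~90K total,
# with 80% of paid volume coming from flagged networks.
paid, organic, total = simulate_network_removal(30_000, 60_000, 24_000)
print(paid, organic, total)  # 6000 81600.0 87600.0 -- total barely moves
```

Under these assumptions, paid installs fall by 80% while total installs drop by less than 3%, which matches the shape of the “Fraud Removed” bars.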

Concern 2: Bad Networks That Show Great Retention Rates

The second potential concern is the great historical retention rates of some bad networks. Marketers are always looking for high-value users, and retention analysis is considered the ultimate way to detect fake installs (e.g. install farms). Most paid channels deliver worse retention than organic traffic, so marketers love finding high-retention ad networks. Even when such networks raise red flags, it is not easy to cut them from your campaign.

If you find such a seemingly effective network, however, the high-retention users you are enjoying may have been poached from your organic installs. If these networks are successfully stealing credit for your organic installs, then of course they show a great retention rate. Therefore, it is critical for savvy marketers to detect and exclude bad ad networks before running retention analysis.

Figure 2 is an example from the same campaign we discussed earlier. At first glance the network's retention looks very good: 80.9% of the organic rate. Unfortunately, it turns out that most of its installs (90.1%) were poached from organic traffic. After excluding the poached organic installs, the retention rate plummets from 80.9% to 47.3% of the organic rate, revealing the true value of the paid users from this bad ad network. Retention rates are important, but they can be highly deceptive.

Fig 2: Retention rates can be misleading: when poached organic installs are excluded, the relative retention rate (organic = 100) drops from 80.9 to 47.3.
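If you want to reproduce this check on your own data, the comparison is straightforward. Below is a sketch using pandas; the column names (network, poached, retained_d7) describe a hypothetical install-level export, not any particular MMP's schema, and the rows are toy data.

```python
import pandas as pd

# Toy install-level data; the schema is hypothetical, not a real MMP export.
installs = pd.DataFrame({
    "network":     ["organic"] * 4 + ["net_a"] * 5,
    "poached":     [False] * 4 + [True, True, True, False, False],
    "retained_d7": [True, True, True, False,         # organic: 75% D7 retention
                    True, True, False, True, False], # net_a: 60% blended
})

def relative_d7_retention(df, exclude_poached):
    """D7 retention by network, indexed to organic = 100."""
    if exclude_poached:
        df = df[~df["poached"]]
    rates = df.groupby("network")["retained_d7"].mean()
    return (rates / rates["organic"] * 100).round(1)

print(relative_d7_retention(installs, exclude_poached=False))  # net_a: 80.0
print(relative_d7_retention(installs, exclude_poached=True))   # net_a: 66.7
```

The same network that looks close to organic quality in the blended view drops sharply once the poached installs are removed.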

Case Study: Advertiser Turns Off Bad Ad Networks

The client we've been discussing throughout this article is a major app with millions of users and a large user acquisition budget. They initially feared losing installs and cutting off channels with good retention rates, but after we assuaged those fears they agreed to follow the recommendations from our prior blog post and to measure the results closely.

Our client attacked the problem of organic install poaching by cutting the fingerprint lookback window from 24 hours to our recommended 1 hour. This simple change reduced their paid install count by 50%, as shown in Figure 3. Best of all, their total install count stayed essentially the same, which is clear evidence that these bad ad networks had simply been poaching organic installs.

Fig 3: After tightening the fingerprinting window, organic installs offset the loss.
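For readers who want to see what tightening the lookback window means mechanically, here is a simplified sketch of last-click fingerprint attribution. Real MMPs apply this logic server-side with far more nuance; the function below only illustrates the rule.

```python
from datetime import datetime, timedelta

LOOKBACK = timedelta(hours=1)  # tightened from the default 24 hours

def attribute(install_time, matching_clicks, lookback=LOOKBACK):
    """Credit the install to the most recent fingerprint-matched click
    inside the lookback window; return None if none qualify."""
    in_window = [(net, t) for net, t in matching_clicks
                 if install_time - t <= lookback]
    if not in_window:
        return None  # no qualifying click: the install counts as organic
    return max(in_window, key=lambda c: c[1])[0]  # last click wins

clicks = [("net_a", datetime(2017, 5, 1, 8, 0)),    # 4 hours before install
          ("net_b", datetime(2017, 5, 1, 11, 30))]  # 30 minutes before
print(attribute(datetime(2017, 5, 1, 12, 0), clicks))  # net_b
# With only net_a's stale click present, a 24-hour window would credit
# net_a, while the 1-hour window would count the install as organic.
```

Installs that no longer match any click fall back to organic, which is exactly where the client's 50% drop in paid installs went.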

Interestingly, the fraudulent ad networks reacted to the shorter fingerprinting window very quickly. They were so eager to defraud our client that they started firing a higher volume of fake clicks so that more of them would land within the tighter attribution window. Simply narrowing the window caused these networks to increase their click count 4x; at their peak they were driving a daily click count equivalent to 16% of the national population.

Fig 4: A bad ad network increases its click count in response to the tighter attribution window.
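Spikes like the one in Figure 4 are easy to catch with simple monitoring. The sketch below flags a network whose daily click volume jumps well above its own recent baseline; the 3x threshold and 14-day baseline are illustrative choices, not tuned recommendations.

```python
from statistics import median

def click_spike_ratio(daily_clicks, baseline_days=14):
    """Latest day's clicks relative to the median of the prior baseline days."""
    baseline = median(daily_clicks[-(baseline_days + 1):-1])
    return daily_clicks[-1] / baseline

# A network that quadruples its click volume overnight, as in Figure 4:
history = [1_000_000] * 14 + [4_200_000]
if click_spike_ratio(history) > 3:  # threshold is an illustrative assumption
    print("possible click spamming; investigate this network")
```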

We responded quickly, recommending additional configuration changes that kept the fraudulent clicks under control. Some of the sketchiest affiliate partners were unable to compete in the more restrictive landscape and simply dropped out.

For this client we built the foundation of our “DoubleCheck” anti-fraud dashboard. We examined the many types of fraud they wished to track and generated a comprehensive fraud score for each network, ranging from 0 to 100. Using this score, the client was able to request refunds from shady networks and received up to a 90% discount on their ad bill. We will look under the hood of the DoubleCheck dashboard in a future post, but for now we provide a sneak peek here:

Figure 5: The DoubleCheck anti-fraud suite reviews your MMP data and provides additional anti-fraud metrics.
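DoubleCheck's actual scoring model is beyond the scope of this post, but the general shape of such a score is simple: normalize several fraud signals per network and combine them on a 0-100 scale. The sketch below is a toy illustration of that idea only; the signal names, values, and equal weighting are invented, not DoubleCheck's real inputs.

```python
def fraud_score(signals, weights=None):
    """Combine per-network fraud signals, each scaled to 0..1 (higher is
    more suspicious), into a weighted 0-100 score."""
    weights = weights or {name: 1.0 for name in signals}  # default: equal weights
    total = sum(weights[name] for name in signals)
    weighted = sum(value * weights[name] for name, value in signals.items())
    return round(100 * weighted / total)

# Invented signals for one hypothetical network:
print(fraud_score({
    "click_spike":      0.9,  # click volume vs. the network's own baseline
    "short_ctit_share": 0.8,  # share of installs with tiny click-to-install times
    "retention_gap":    0.6,  # retention drop once poached organics are excluded
}))  # -> 77
```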

The client's fight may never be complete, but many great tools are now at their disposal to aid them in the battle against fraud.

Conclusion: Fraud is Omnipresent and Adaptive

We argue that no advertiser can take a “one-size-fits-all” strategy to fighting fraud. When a bad ad network's business model is based on fraud, it will adapt very quickly once its profits are challenged. For this reason, we believe the fight against fraud will be ongoing, requiring constant vigilance from advertisers and their partners.


This concludes our three-part series on bad ad networks. In our earlier posts we discussed the scope of ad network fraud and the simple checks you can perform to combat it. Please visit http://molocoads.com/ for more information.