Proactive Ad Fraud Prevention With Artificial Intelligence

As marketers grapple with the problem of ad fraud and its mounting losses, artificial intelligence (AI) is proving to be an effective weapon that can reverse the tide.

Marketers in Asia Pacific continue to pour money into advertising: ad spending is expected to rise 10.7 percent to US$210.43 billion in 2018, according to eMarketer. However, the ever-growing problem of ad fraud is skewing their reporting and standing in the way of better returns.

Even mobile marketers, who expected app installs to be safer, faced 30 percent more fraud during the first quarter of 2018 than in the same period last year, according to AppsFlyer’s “The State of Mobile Fraud: Q1 2018” study. Mobile app marketers were exposed to US$700-US$800 million in ad fraud losses worldwide. What makes ad fraud such a challenging problem today?

More Sophisticated Ad Fraud Methods Today

In the early days of ad fraud, the methods fraudsters adopted were relatively simple. They used bots to drive large volumes of traffic to websites, bought cheap traffic through auto redirects or employed people in click farms to install apps. Once a click was made or an app was installed, their job was done. However, advertisers soon caught on, and their focus began shifting to post-install quality, engagement, last-click attribution and return on investment (ROI).

Today, ad fraud has evolved into a completely different beast. In a technique called ‘click injection’, for example, fraudsters steal credit for app installs by triggering a click just before an app finishes installing. Or they stack ads on top of one another, so that a single placement generates impressions for multiple ads.

Fraudsters also realize that to remain undetected, they not only need to drive traffic to websites or generate large numbers of app installs, but also to sustain engagement afterwards and mimic human behavior. They employ human workers in click farms to imitate real interactions, develop apps that hijack devices to generate additional clicks, and create simulators that generate fake installs from bot networks.

Brands Turn to Technology for Help

Big brands and major publishers have begun to act against ad fraud. The Guardian recently collaborated with Google and MightyHive, a programmatic solution provider, to investigate programmatic fraud. Adobe Advertising Cloud has partnered with cybersecurity firm White Ops to tackle the problem in streaming TV media.

Another technology that is gaining traction among marketers is blockchain, thanks to its ability to enforce decentralized monitoring and independent verification. However, there are still many challenges to be addressed, like handling the massive transaction volumes involved in real-time bidding and getting universal acceptance from everyone involved.

Joe Su, Chief Technology Officer and Co-founder of Appier, explains, “I think it will be a while before everyone in the supply chain – from media buyers to ad exchanges and publishers – cooperates to opt into a universal standard to make it feasible.”

Sophisticated Detection Powered by Machine Learning

Traditional approaches to fraud detection rely on simple, human-created rules that measure at most two or three signals or dimensions (metrics like conversion rate and click time). For example, IP blacklists that block suspicious traffic, and filters that exclude installs with an abnormally low click-to-install time (CTIT), have worked in the past.

However, the problem with these fixed rules is that they are pre-defined: advertisers approach detection already knowing what they are looking for. For example, one might decide to exclude app installs with a CTIT below 10 seconds, since such installs are more likely to be bot-operated. But if the rules are fixed ahead of time, it is only a matter of time before fraudsters figure out ways to circumvent them.
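To make the limitation concrete, here is a minimal sketch of such a fixed CTIT rule in Python. The threshold, field names and install records are illustrative only, not from any vendor’s implementation:

```python
CTIT_THRESHOLD_SECONDS = 10  # installs faster than this are flagged as bots

def is_suspicious(install):
    """Flag an install whose click-to-install time (CTIT) is implausibly short."""
    ctit = install["install_time"] - install["click_time"]
    return ctit < CTIT_THRESHOLD_SECONDS

installs = [
    {"id": "a", "click_time": 0, "install_time": 4},   # 4s CTIT: likely a bot
    {"id": "b", "click_time": 0, "install_time": 95},  # 95s CTIT: plausibly human
]
flagged = [i["id"] for i in installs if is_suspicious(i)]
```

Once fraudsters learn the 10-second cutoff, they simply delay their fake installs past it – the rule never adapts.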

A more effective solution is to leverage technology that can keep pace with, and more importantly, stay ahead of today’s ad fraud techniques. That means marketers need to go beyond simple, fixed rule-based criteria, towards AI-powered solutions that are capable of learning new fraud patterns and refining the rules on their own.

As fraudsters employ new techniques that are capable of mimicking human behavior, these machine learning algorithms can help marketers look for fraudulent behavior not immediately evident to the human eye. This is especially critical in the case of app installs, where detecting fraudulent clicks and impressions before install becomes paramount.

“Signals like a suspiciously low click-to-install time, or CTIT, can indicate fraud, but by the time the install has occurred, it would be too late: attribution for the install has already been counted,” said Su.

One foundational AI approach, the “tree-based model”, works by analyzing a massive number of signals to achieve maximum coverage and accuracy in detecting outlier behavior. Consider the case of “the chameleon”, where fraudsters mimic legitimate publishers and generate installs at a later date, when natural user retention is expected to drop.

Another scenario is the “inventory burst”, where the inventory count from a suspicious publisher spikes at a time when in-app registrations generally fall. As machine learning algorithms learn from the data gathered over time, both of these sophisticated fraud patterns can be detected and fed back into the filters for improved detection in the future.
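As an illustration of the “inventory burst” scenario, the sketch below flags days whose install volume deviates sharply from a publisher’s historical pattern, using a simple z-score on one signal. Real tree-based models combine many such signals and learn their thresholds from data; this single-signal example, with invented numbers, only shows the idea:

```python
import statistics

def burst_days(daily_installs, z_threshold=3.0):
    """Return indices of days whose install volume spikes far above the norm."""
    mean = statistics.mean(daily_installs)
    stdev = statistics.stdev(daily_installs)
    return [
        day for day, count in enumerate(daily_installs)
        if stdev > 0 and (count - mean) / stdev > z_threshold
    ]

# Thirteen ordinary days around 100 installs, then a sudden tenfold spike
history = [98, 102, 97, 101, 99, 103, 100, 96, 104, 99, 101, 98, 100, 1000]
suspicious = burst_days(history)  # the spike on the final day stands out
```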

By detecting more cases of ad fraud, marketers can weed out poor-quality traffic and measure their return on advertising spend (ROAS) more accurately. In a study of 5.2 billion data points from mobile app campaigns in the region, Appier’s AI-based approach detected twice the number of suspicious installs and realized 4 percent more ROAS than a traditional approach.

It won’t be long before fraudsters develop even more novel techniques to try and escape detection. Advertisers who are vigilant and proactive in preventing this with AI-based fraud detection will be in a better place than their competitors to reap the benefits of their investment.

How Far Are We From Explainable Artificial Intelligence?

Artificial intelligence (AI) is heralding a revolution in how we interact with technology. Its capabilities have changed how we work, travel, play and live. But this is just the beginning.

The next step is explainable AI (XAI), a form of AI whose actions are more easily understood by humans. So how does it work? Why do we need it? How will it forever change the way industries – especially in marketing – function?

The Mystery of the Black Box: The Problem With Current AI

No one would deny that artificial intelligence produces amazing results. Computers that can not only process vast amounts of data in seconds, but also learn, decide and act on their own have turned many industries on their heads – according to PricewaterhouseCoopers, AI is estimated to contribute up to around US$15 trillion to the global economy. However, in its current form, AI does have one major weakness: explanation.

Namely, it can’t explain its decisions and actions to humans. This is the “black box” problem in machine learning: calculations and decisions are carried out behind the scenes, with no rationale given as to how the AI arrived at its conclusion.

Why is this a problem? It doesn’t engender trust in the AI, which in turn raises doubt about its actions. Explainable AI is expected to solve that.

How XAI Works

XAI is much more transparent. The human actors interacting with the AI are informed not only of what decisions it reached and actions it will take, but how it came to those conclusions based on the available data. It aims to do this while maintaining a high level of learning performance.

Current AI takes data into its machine learning process and produces a learned function, leaving the user with questions such as: Why did it do that? Why didn’t it do something else? When will it succeed, and when will it fail? How can I trust it? And how do I correct an error?

By contrast, XAI uses a new machine learning process to produce an explainable model with an explainable interface. This should answer all the questions above.
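A toy illustration of what an “explainable interface” might look like: a transparent linear scoring model that returns each feature’s contribution alongside its decision, rather than a bare yes/no. The feature names, weights and threshold below are hypothetical:

```python
# Hypothetical features and weights for a "should we target this user?" decision
WEIGHTS = {"recent_purchases": 2.0, "email_opens": 1.0, "days_inactive": -0.5}

def decide_and_explain(features, threshold=3.0):
    """Return the decision plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    return score >= threshold, contributions

decision, why = decide_and_explain(
    {"recent_purchases": 2, "email_opens": 3, "days_inactive": 4}
)
# `why` shows what drove the score up (purchases, opens) and down (inactivity)
```

A real XAI system must do this for far more complex learned models, which is precisely the research challenge.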

This carries its own risks. Any decision made by an AI is only as good as the data used to make it. While XAI increases trust in the decision made, that trust could be misplaced if the data is unreliable.

Another problem is how well the AI explains its decisions. If it is not comprehensible to the user – who could be a lay person with no technical background – the explanation will be worthless. Solving this will involve scientists working with UI experts, along with complex work on the psychology of explanation.

Risk, Trust and Regulation: Why We Need XAI

In so-called “big ticket” domains – military operations, finance, safety-critical systems in autonomous vehicles and diagnostic decisions in healthcare – the risk factor is high. Hence it is crucial that the AI explains its decisions in order to boost trust and confidence in its ability. But there are also a host of benefits for businesses in other industries.

XAI can address pressures like regulation, as it will enable full transparency in case of an audit. It will encourage best practice and ethics by explaining why each decision is the right one morally, socially and financially. It will also reinforce confidence in the business, which will reassure shareholders.

It will also put businesses in a stronger position to foster innovation, as the more advanced the AI, the more capable it is in terms of innovative uses and new abilities. Interacting with AIs will soon be standard business practice in many industries, including marketing. Hence it is vital that users can do so comfortably and with confidence.

Experts think this will empower marketers, effectively turning AI into a co-worker rather than a tool.

“In order to trust AI, people need to know what the AI is doing,” says Hsuan-Tien Lin, Chief Data Scientist, Appier. “Much like how AlphaGo is showing us new insights on how to play the board game Go, explainable AI could show marketers new insights on how to conduct marketing. For instance, AI can reach the right audience at the right time now, but if future XAI can explain this decision to humans, it would help marketers understand their audience more deeply and plan for better marketing strategies.”

It could also usher in a new way of working, with marketers accepting or rejecting XAI’s explainable suggestions with reasons in order to help the AI learn. “Today, it is likely that many great suggestions are rejected because they are not explained, and so humans overlook their power,” says Min Sun, Chief AI Scientist, Appier. However, these days could soon be over…

The Defense Advanced Research Projects Agency (DARPA) is currently running an XAI program through 2021. The program is expected to enable “third-wave AI systems”, in which machines build underlying explanatory models that describe real-world phenomena based on their understanding of the context and operating environment. Other experts predict XAI will become a reality within three to five years.

XAI is no doubt the next step for AI, improving trust, confidence and transparency. Businesses would be wise not to overlook its potential.

Is Artificial Intelligence the Remedy for Brand Safety Woes?

When it comes to programmatic advertising today, companies are focusing on brand safety as much as on impressions, click-throughs and revenue generation. Yet the scale and speed that programmatic offers make brand safety virtually impossible to monitor manually.

However, with the increasing adoption of artificial intelligence (AI) in digital marketing, the technology will not only help marketers better target their ideal audience; it might just be the cure for protecting advertiser dollars.

Programmatic Could Compromise Your Brand Safety

Brand safety came under the spotlight following a number of advertising mishaps in early 2017. Alexi Mostrous of The Times revealed that many household brands were unwittingly supporting terrorism on YouTube, with their ads placed on hate videos and Islamic State videos. Later the same year, ads from some of the world’s biggest brands were found running alongside videos that sexually exploited children. This led to widespread panic, with many brands pulling programmatic spend until publishers like Google could assure them that measures were being taken to filter out such content.

In 2018, brand safety broadened to cover any offensive, illegal or inappropriate content that appears next to a brand’s assets, threatening its reputation and image. This could include controversial news stories or opinion pieces, as well as fake news or content not aligned with a brand’s values – for example, a fast food company’s ad appearing next to an article about heart disease.

In a recent survey, 72 percent of marketers stated that they were concerned about brand safety when it came to programmatic. Also, more than a quarter of respondents claimed that their ads had at some point been displayed alongside controversial content.

Why are brands running scared? Because such placements have a real impact on consumer perception. Nearly half of consumers say unequivocally that they would boycott products advertised alongside offensive content, and a further 38 percent report a loss of trust in such brands.

To be fair to brands, they are not choosing to support such content.

Traditional Techniques Come With Limitations

Brands understand the damage that offensive content could cause their image, but it is not feasible for them to implement customized brand safety measures across each ad placement.

Digital advertisers do not have direct relationships with publishers. Ad exchanges receive inventory from thousands of websites and auction it off within milliseconds on the basis of demographics, domain and ad size. Hence, there are no checks for context or appropriateness, only audience relevance.

While the explosion of programmatic may be offering more opportunities to reach the right audience, the sheer volume also makes it difficult to monitor.   

Of course, there are some topic areas that no brand will advertise around – terrorism, pornography, violence, etc. And brands can stay away from content around these by using blacklists, whitelists and keyword searching. However, these have their own limitations.

A blacklist, for example, lists individual words a brand does not want to be associated with – but it ignores nuance and context, letting some unsafe placements slip through the net, or blocking placements that may, in fact, be safe.
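The failure mode is easy to demonstrate. In the sketch below, using an invented two-word blacklist, a benign public-health headline is blocked while harmful content phrased differently slips through:

```python
BLACKLIST = {"attack", "violence"}  # invented two-word blacklist

def is_blocked(headline):
    """Block a headline if any blacklisted word appears in it."""
    return bool(set(headline.lower().split()) & BLACKLIST)

# False positive: a benign public-health story gets blocked
blocked_safe = is_blocked("how schools reduce violence through counselling")
# False negative: an unsafe story phrased differently slips through
blocked_unsafe = is_blocked("militant group storms village overnight")
```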

Finally, safety is subjective. A washing machine brand will have nothing to lose by advertising next to content on prevention of tooth decay, but this could be problematic for a biscuit or chocolate brand.   

In the long term, such nuance-agnostic techniques cannot fully assure brand safety. Nor can manual methods and checks keep up with the volume, scale and speed that characterize programmatic today.

AI Introduces Context to Content

Against this backdrop, artificial intelligence – with algorithms that can understand nuance and context – is fast becoming the answer to marketers’ brand safety woes.

Although AI solutions might not be able to eliminate false positives or avoid the damage entirely, such solutions, specifically those that use machine learning (ML), natural language processing (NLP) and semantic analysis, can offer the nuanced contextualisation that programmatic is lacking today.

ML can ‘learn’ how people approve or blacklist content, then use this to automatically deem new content appropriate or offensive. NLP and semantic analysis assess brand safety at a granular level by understanding the context of a page, rather than only looking at keywords or the domain name.
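As a rough illustration of how a system might ‘learn’ from human approval decisions, the sketch below scores pages with simple word counts, in the spirit of a naive Bayes classifier. The training pages and smoothing are invented, and production systems use far richer NLP features than bare word counts:

```python
from collections import Counter

def train(labelled_pages):
    """Tally word counts separately for human-approved and human-blocked pages."""
    counts = {"approved": Counter(), "blocked": Counter()}
    for text, label in labelled_pages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text, smoothing=1.0):
    """Score a new page against each label and return the likelier one."""
    scores = {}
    for label, counter in counts.items():
        total = sum(counter.values()) + smoothing * len(counter)
        score = 1.0
        for word in text.lower().split():
            score *= (counter[word] + smoothing) / total
        scores[label] = score
    return max(scores, key=scores.get)

pages = [
    ("family recipes for weeknight dinner", "approved"),
    ("travel guide to quiet beach towns", "approved"),
    ("graphic footage of violent riot", "blocked"),
    ("shocking violent attack caught on video", "blocked"),
]
model = train(pages)
```

Notice that a page can be flagged even when it contains no blacklisted word, because the model has learned which vocabulary tends to accompany blocked content.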

Using AI tools that can process large volumes of data at speed to analyze placements, advertisers can enjoy the scale and targeting efficiency of programmatic while avoiding potentially damaging ad placements. AI can also recover the reach that false positives cost, by surfacing safe content that brands would otherwise be blind to.

After the YouTube debacle, Google confirmed that it was using AI to make YouTube content safe for brands, stating that ML allowed it to flag offensive content faster and more efficiently than manual methods.

Also, when it comes to brand safety, post-campaign analysis will simply not cut it. Brands have to combine programmatic tools with AI to ensure that the ad placements they are bidding for do not contain content inappropriate to the brand message.

Last but not least, brands should note that AI tools are only as good as the rules that drive them. Brands must first understand safety within their own context – what they deem appropriate or offensive – and re-examine those rules periodically as context evolves. Human intervention therefore cannot be done away with entirely, but AI can handle the sheer volume and scale at which brands operate today.

Are Data Scientists Evolving With the Rise of Artificial Intelligence?

With machine learning (ML) expected to progress at a phenomenal pace, it is set to become one of the most powerful tools for businesses to enhance productivity and drive innovation. While ML, one of the most popular artificial intelligence (AI) applications, holds a lot of promise for businesses, is the role of the data scientist already evolving to keep up with the change?

What Is Next in AI

Continued advances in AI will see autonomous systems perceive, learn, decide and act on their own. But to ensure the effectiveness of these systems, the machine will need to be able to explain its decisions and actions to humans. This is so-called explainable AI (XAI).

“In the future, many AI systems are going to interact with people, especially people who take on responsibilities. That is why AI needs to be explainable: the behavior of the system needs to be easily anticipated and interpreted by people,” said Min Sun, Chief AI Scientist at Appier.

Sun also pointed out that in the future, AI is going to be less supervised – requiring less human input – and more creative.

Data science has so far been dominated by time-consuming ML tasks, such as data wrangling and feature engineering, which can take up 80 percent of a data scientist’s time. But such tasks will sooner or later be automated, according to Deloitte’s Technology, Media and Telecommunications Predictions 2018 report.

Such advances in AI will give data scientists more time for more complex tasks. However, this raises a problem: the majority of data scientists do not possess the required advanced machine learning skills, such as deep learning (DL), a subfield of ML.

The Impact of Machine Learning on Businesses

Previously, companies might have spent a lot of time doing guesswork based on consumer data gathered online and offline, which is usually fragmented and siloed. With an AI-based approach, brands are able to unify data across different channels for a holistic view and analysis of the audience and their conversion journey.

Machine learning and deep learning allow a computer to take in huge sets of data and not only predict the outcome, but also understand what the desired output should be. It can be integrated into many aspects of digital marketing, such as predicting consumer behavior and campaign outcomes, marketing automation, sophisticated buyer segmentation and sales forecasting.

With these technologies, businesses have a more efficient and cost-effective way to build trustworthy AI systems that professionals can use and that human users can interact with naturally, according to Hsuan-Tien Lin, Appier’s Chief Data Scientist.

So it is no surprise that businesses are increasingly ramping up their adoption of AI technology. According to the International Data Corporation (IDC), AI will remain a key spending area for companies in the near future, with worldwide spending on cognitive and AI systems increasing 54.2 percent in 2018 to US$19.1 billion. That number could rise to US$52.2 billion in 2021, IDC predicted.

Bridging the Machine Learning Skills Gap

As more businesses look to adopt AI techniques like machine learning and deep learning, data scientists are urged to upskill, in order to keep up with the current trends. Rudina Seseri, Founder and Managing Partner at Glasswing Ventures, wrote in Forbes, “Data scientists – at least the successful ones – will evolve from their current roles to becoming machine learning experts or some other new category of expertise, yet to be given a name”.

Leading tech companies such as Google and Microsoft have already been offering relevant courses aiming to help bridge the talent gap. For example, Google not only made its ‘Machine Learning Crash Course’ available to the general public earlier this year as part of the company’s ‘Learn With Google AI’ initiative, it has also launched a machine learning specialization on Coursera, an online learning platform.

Andrew Ng, one of the world’s best-known AI experts, also launched a set of courses on deep learning through Coursera in 2017, hoping to help more people get up to speed on key developments in AI.

While technical skills will remain the foundation of the data scientist’s role, it is crucial for them to master human-centric skills too. Data scientists will need a better understanding of the overarching business strategy and of real-world business challenges, in order to create solutions that solve real problems.

Businesses are looking for a total solution, Sun pointed out. For instance, self-driving car makers need a system consisting of perception, communication, decision-making and control. Previously, each module was designed separately, but the industry has been transitioning to a more joint design since the fatal self-driving Uber crash, in which the perception system identified the pedestrian but the decision-making module failed to react.

“The ability to design a complete system consisting of multiple ML modules will become more and more important,” he said. “In the future, data scientists will need modeling and analysis skills at the system level to provide business people with the right total solution for the market.”

Food for Thought: 10 AI Quotes You Should Read

Artificial intelligence (AI) is no longer just the domain of sci-fi fans or tech nerds; it is becoming a key pillar of how we do business and live our lives. While it will take time to see how far this prodigious technology goes, AI has long been a subject of interest to some of the brightest minds and best-known personalities. Here are some of their thoughts on AI.

AI 101: Deep Learning

Imagine that you are a marketer looking to run a targeted marketing campaign. What if you had a tool that could easily segment your market on the basis of factors like economic status, purchasing preferences, online shopping behavior, etc. so that you could customize your approach and messaging to each segment for maximum impact and conversion?

These are the kinds of insights that deep learning (DL)* can offer.

DL refers to a family of advanced neural networks that mimic the way the brain processes information and extract goal-oriented models from scattered and abstract data. What differentiates it from traditional machine learning is the use of multiple layers of neurons to digest the information.  

A DL program trains a computer to perform human-like tasks, such as recognizing speech or predicting consumer behavior. It is fed large amounts of data and taught what the desired output should be. The more data it is fed, the better its performance.

The program then applies calculations to approach that output, modifying the calculations and repeating the cycle until the desired outcome is achieved. The ‘deep’ hence refers to the number of processing layers the data must pass through, and to how the learning algorithms are stacked in a complex, hierarchical manner. The more layers there are, the ‘deeper’ the learning.
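The feed-forward-and-adjust cycle described above can be sketched with a single artificial neuron learning from labelled examples. Deep learning stacks many layers of such neurons; this one-layer example, with invented data, only illustrates the training loop:

```python
import math

def train_neuron(samples, epochs=2000, lr=0.1):
    """Fit one neuron by repeatedly nudging its weights toward the desired outputs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # forward pass: weighted sum squashed into (0, 1)
            z = w[0] * x1 + w[1] * x2 + b
            pred = 1.0 / (1.0 + math.exp(-z))
            # adjust the calculation and repeat the cycle
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    """Return True if the trained neuron fires for this input."""
    z = w[0] * x1 + w[1] * x2 + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

# Teach the neuron an OR-like rule from (input, desired output) pairs
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
```

A deep network repeats this same adjust-and-retry loop across many stacked layers, which is what lets it capture patterns a single neuron cannot.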

DL can analyze huge volumes of data to detect patterns and predict trends and outcomes. This is especially interesting to marketers, finding application in predicting consumer behavior and campaign outcomes, marketing automation, sophisticated buyer segmentation and sales forecasting, to name a few use cases.

*Deep learning is not magic, but it is great at finding patterns.