Your Data Science team has delivered a model with 99% accuracy. What's the worst that could happen? For the blogging platform Medium, the worst came to pass on March 21st, 2021, when its recommendation algorithm - whose goal was to provide scalable, personalized article curation for readers - was caught suggesting erotic content to the account of the President of the United States.

This is what happened. Medium had somehow added President Joe Biden as a writer on several publications of dubious content. Its recommendation model mistook that for a sign that the US President engaged with the topic, and the rest is infamy. The immediate consequence was likely nothing more than a few embarrassing internal emails but, as the news got out, the Medium brand took a hit, first from ridicule and then from suspicion about its AI-driven business model.

Cases where a promising machine learning model backfires are not uncommon. It's easy to blame the model. But a model only knows what it should or shouldn't do from the data or from our explicit instructions. In this case, a safeguard on the model's recommendations was missing.

An offer they won't soon forget

Even if you sit on top of a mountain of data, don't expect it to contain the knowledge of such safeguards. Imagine, for example, that you are a retailer. You want to use transactional data to delight your customers with personalized offers.

That data won't tell you, for example, that you shouldn't offer alcohol to people whose religion forbids it. Or that maybe you should refrain from offering incontinence diapers to customers who have never bought them in the past.

In 2012, Target learned the need for such restrictions the hard way.

Target's recommendation engine once figured out that one of its customers was pregnant. The caveat: the customer was a teenager, and her father didn't know about the pregnancy. There were no restrictions on which products the model could offer or to whom it could offer them, so it sent pregnancy-related coupons to the unsuspecting father's household. Imagine finding out you are going to become a grandfather through a "buy one, get one free" coupon.

Having worked for years in retail, where the business goals are ambitious and the margins are thin, I know how tempting it is to jump into a badly thought-out AI strategy as a Hail Mary to save the year's P&L. Once, I was asked to use credit card information (like card brand and type) together with demographic data and purchase history to infer the wealth of families. The idea was to offer different pricing and financing conditions according to the customer's wealth. Fearing a potential PR disaster, I had to dissuade the company's leadership from the idea. Saying "no" is an integral part of a Data Scientist's job.

Why does it keep happening?

Machine learning projects are prone to failure. They're complicated to put in place and trickier to maintain. Don't believe me? Ask a data scientist if writing a unit test for a machine learning model is easy and watch their reaction.
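
To make the point concrete, below is a minimal sketch of the kind of test that ends up standing in for a unit test. The data, model, and thresholds here are all hypothetical, and it assumes a scikit-learn-style classifier: since you can't assert exact outputs the way you can for an ordinary function, you're reduced to testing loose properties of the model's behavior.

```python
# A hypothetical behavioral test for an ML model, pytest-style.
# We can only assert *properties* of the output (valid probabilities,
# determinism, a loose accuracy floor), never exact expected values,
# which is what makes "unit testing" a model so awkward.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def train_model():
    X, y = make_classification(n_samples=500, random_state=0)
    return LogisticRegression().fit(X, y), X, y

def test_model_behaves_sanely():
    model, X, y = train_model()
    proba = model.predict_proba(X)
    # Probabilities must be valid and each row must sum to 1.
    assert np.all((proba >= 0) & (proba <= 1))
    assert np.allclose(proba.sum(axis=1), 1.0)
    # Same input, same output: inference must be deterministic.
    assert np.array_equal(model.predict(X), model.predict(X))
    # A loose floor on training accuracy, not an exact value.
    assert model.score(X, y) > 0.8
```

Note that none of these assertions would have caught the Medium or Target failures; they only tell you the model is mechanically healthy, not that it's answering the right question.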

Yet, a machine learning model that fails to provide the right answer is nowhere near as costly (or as avoidable) as a model that delivers the right answer to the wrong, or incomplete, question.

When we want our recommender system to offer the right product to the right person, we often forget to define what "right" means. We may want to exclude prescription drugs and adult content from the "right products", for example. And as for the "right person", we should be extra careful if the President of the United States is a user of our product. Failing to do so can create a situation that spirals into a PR nightmare, or, as Target discovered, one where people get creeped out when their retailer learns about a pregnancy before the family does.

This is why all models need to be clear about their restrictions. They need to apply proper "air traffic control".
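
In practice, that "air traffic control" can be as simple as a rule-based filter sitting between the model's raw output and the customer. The sketch below is hypothetical (the product categories, fields, and rules are made up for illustration), but it shows the idea: the restrictions live in explicit, reviewable code rather than being left for the model to infer from data.

```python
# A hypothetical guardrail layer for a recommender system. The model
# scores products however it likes; business restrictions are applied
# afterwards, in plain code that anyone can audit.

from dataclasses import dataclass, field

@dataclass
class Product:
    sku: str
    category: str

@dataclass
class Customer:
    customer_id: str
    # Categories this customer has bought from before.
    purchase_history: set = field(default_factory=set)

# Categories we never recommend, regardless of the model's score.
RESTRICTED_CATEGORIES = {"adult_content", "prescription_drugs"}

def is_allowed(product: Product, customer: Customer) -> bool:
    """Business rules the model cannot be trusted to learn from data."""
    if product.category in RESTRICTED_CATEGORIES:
        return False
    # Only offer sensitive repeat-purchase items to past buyers.
    if (product.category == "incontinence_diapers"
            and product.category not in customer.purchase_history):
        return False
    return True

def safe_recommendations(ranked_products, customer):
    """Filter the model's ranked output through the guardrails."""
    return [p for p in ranked_products if is_allowed(p, customer)]
```

A filter like this wouldn't have made Medium's model any more accurate, but it would have kept the embarrassing recommendations from ever reaching the President's feed.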

It's the reason why domain expertise is so valuable in data scientists. It is crucial to understand AI in the context of business, and not only as a scientific project.

Battle-tested data scientists have probably learned that lesson the hard way. But, since applied data science is itself a relatively new field, such professionals are few and far between. In their absence, it's important that the data team ultimately answers to someone who knows the domain. It's one of the reasons it makes sense to have data teams reporting to Marketing, Sales, or Product directors.

Not only problem-solvers, but problem-thinkers

Technical workers tend to start with the technical solution and work backwards from there. If they are not business-oriented, they will focus their effort on improving the model's accuracy instead of thinking deeply about the problem the model should be solving.

The assumption, of course, is that whoever handed down the problem must have already thought it through. This premise is seldom true and often dangerous. Poorly defined goals are particularly harmful when it comes to machine learning, as one model's output can impact the data that, in turn, will be ingested by other systems. If there's something wrong with those predictions, the consequences are likely to snowball.

Take algorithms that prioritize "engagement time", for instance. As Facebook and YouTube found out, the more extreme the content, the more likely humans are to pay attention to it. Anti-vaxxers, the radicalization of the youth, racial disputes, deepened political divides: the consequences of a poorly defined business metric can be disastrous.

But, most of the time, you're just losing money

Time and again, we've seen machine learning projects start with minimal input from the business teams. These projects lacked alignment on expectations, goals, and success criteria. Since the data science team was never clear about the project's objective, it never knew whether it was making any real progress.

There are three possible outcomes for these projects: 1) they stay forever in the development stage, 2) they are rejected by the business teams because the model couldn't prove its business case, or 3) they are released with deep flaws that negatively impact the business.

Hiring a data team with business expertise is still a challenge

AI projects require strategy and technical expertise, but they also require a tight partnership with the business team and with senior leadership. It's not enough to know how to take a model from 81% accuracy to 89%. Data scientists and data engineers must either fully understand the caveats of the business they are in, or excel at communication so they can explain their work, in simple terms, to the people who do possess that knowledge.

At DareData, our mission is to give data a purpose by solving business problems. We pride ourselves on being a data consulting firm with a talent network whose domain expertise spans several industries. We are here to help companies with their data science projects, whether by sourcing the right talent or by helping them hire it. If that piqued your interest, we'd love to talk to you.