Tom H. Hayden

Lecturer at Northwestern Medill IMC

After receiving his law degree and passing the bar exam, Tom H. Hayden spent his professional career on the corporate side with several prominent marketing and communications companies. He now teaches both graduate and undergraduate IMC courses in marketing law and data privacy. He also serves as director of the IMC undergraduate certificate program. Tom earned his bachelor's and JD degrees from Saint Louis University.

Tom E. Hayden

Entrepreneur-in-Residence, The Garage, Northwestern

Tom E. Hayden was an engineer on the fraud team at Facebook and built out the data infrastructure at GrubHub. He holds a BA in Telecommunications, Information Studies, and Media from Michigan State University, and a Master of Science in Information, Incentive Centered Design from the University of Michigan. Tom was also an NU graduate student in theoretical computer science.

As Marketing Algorithms Proliferate, Marketers’ Proficiency in Law and Technology Must Follow

One of the most successful venture capital strategies over the past decade has been to move tools out of the computer science laboratory and into industry. Companies working on buzzwords like ad tech, fintech, and martech are poised to disrupt their industries, and some already have. In the financial sector, automation via artificial intelligence (AI) has replaced the trading floor with server farms.

Marketing technology is having a similar disruptive impact, forcing firms to rethink how marketing campaigns are developed, implemented, optimized, and scaled. Technology drives automation, improving productivity by shifting employees to more effective tasks, and it enables scale that was previously unattainable.

It comes with a dark side, however. If your team does not carefully build and tune the technology, you could be at risk of algorithmic bias. Algorithms are nearly always trained on past behavior, a technique known as supervised learning. Algorithmic bias occurs when an algorithm discriminates against a segment of the population because it improperly over-weights the effect of past behavior.

Think of it like this: suppose you run a marketing campaign targeted towards young affluent men, and the campaign performs well. If you were to build an algorithm to predict who you should target next, it will likely tell you to target young affluent men… again, and again, and again. Even the best AI is simply binary code following a set of instructions. Whether you build the algorithm thoughtfully or not, it will do exactly what you tell it to.

In decision-making, people over-fit data all the time. In the early days at GrubHub, the marketing team regularly designed campaigns to target under-35 urban women because it was the common belief that this was the top customer demographic. Because that belief was only partially true, GrubHub likely missed out on good opportunities to connect meaningfully with new customers while targeting the same demographic over and over again.
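The feedback loop described above can be sketched in a few lines of code. This is a hypothetical illustration, not GrubHub's actual model: the segment names and numbers are invented, and the "model" is deliberately naive, ranking segments by raw conversion counts. Because one segment was targeted far more heavily in the past, it dominates the counts, and the model recommends it again even though an under-targeted segment actually converts at a higher rate.

```python
# Hypothetical past campaign data. One segment received most of the
# impressions, so it has the most recorded conversions.
past_campaigns = [
    {"segment": "young affluent men", "impressions": 10_000, "conversions": 400},
    {"segment": "young affluent men", "impressions": 12_000, "conversions": 450},
    {"segment": "under-35 urban women", "impressions": 500, "conversions": 20},
]

def recommend_next_segment(campaigns):
    """Naive 'model': rank segments by total observed conversions."""
    totals = {}
    for c in campaigns:
        totals[c["segment"]] = totals.get(c["segment"], 0) + c["conversions"]
    return max(totals, key=totals.get)

def conversion_rate(campaigns, segment):
    """Conversions per impression for one segment."""
    imps = sum(c["impressions"] for c in campaigns if c["segment"] == segment)
    convs = sum(c["conversions"] for c in campaigns if c["segment"] == segment)
    return convs / imps

print(recommend_next_segment(past_campaigns))
# -> 'young affluent men'

# Yet the rarely-targeted segment converts at a higher rate:
print(conversion_rate(past_campaigns, "under-35 urban women"))   # 4.0%
print(conversion_rate(past_campaigns, "young affluent men"))     # ~3.9%
```

The bug here is not in the arithmetic but in the framing: the model over-weights past behavior that was itself shaped by past targeting decisions, which is exactly how algorithmic bias compounds.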

Implementing the right technology allows you both to expand your team's customer knowledge base and to automate your campaigns effectively. However, doing so requires a thorough understanding of the data and its sources, and it requires asking the right questions, perhaps even bringing a data scientist and an attorney on board to help.

It’s important to ask questions such as: How does this algorithm work? What specifics do we need to know about it? How was it built? What type and volume of data does it typically perform best with, and how much did we use? Does the model line up with my expectations? If not, why not?

But answers to these questions may or may not uncover the legal or regulatory issues that will potentially accompany your use of the data. Accurate data is not immune to bias, particularly when the data has been aggregated from several sources. Data gathered and shared among multiple sources can be particularly problematic, sometimes leading to unpredictable or even discriminatory outcomes. Who among us can forget the New York Times story about Target’s ability to accurately predict pregnancy?

In most fields today the best technological and legally compliant AI implementations are the ones where decisions are made by people, augmented by the use of artificial intelligence. As we have seen, decisions made without the application of human judgment can lead to unexpected or even disastrous results. And historically, the regulators do not hold the machines responsible when they generate conclusions that prove to be harmful. When the systems themselves and the conclusions they generate cannot be properly explained, the regulators will come looking for you.

Thus, with the adoption of technologies like machine learning and artificial intelligence, it is no longer enough for IMC practitioners to simply be “marketers.” They will need to integrate technology and law into their practice and processes. Successful IMC professionals will need to understand how the technology tools being used in digital marketing communications actually work, especially when those tools are needed to provide better, deeper, and bias-free insights into consumer behavior. They will also need to understand the legal and regulatory requirements pertaining to the use of the data that feed and drive these new technologies.

A basic proficiency in technology and law is not on the radar screens of most IMCers. But it should be if you want your tech tools to provide the bias-free, legally compliant output necessary to grow your business.

Written by Tom H. Hayden, Lecturer at Northwestern Medill IMC, and Tom E. Hayden, Entrepreneur-in-Residence, The Garage, Northwestern

Edited by Benjamin Mandel, Medill IMC Class of 2018

© 2020 Northwestern University

1845 Sheridan Road
Evanston, IL 60208-2101