19 Mar 2018

You’ve been harvested: Why travel marketers should be worried about Cambridge Analytica

This is a new weekly-ish column from the editorial staff that will cover current events from a direct perspective based on our team’s expertise in travel, hospitality, and technology. 

In case you missed it last week, UK firm Cambridge Analytica was accused of scraping 50 million user profiles from Facebook without authorization. The furor has rightly been extreme, reaching far beyond the United States into Europe.

Beyond the political bent, the inescapable core truth of this is that one company abused a platform created by another company to harvest private data from people who didn’t opt in to sharing it. Yes, some of this data was set to public by users who either didn’t know how to make their settings private, didn’t want to, or potentially weren’t even given the ability to opt out by Facebook.

For the travel industry, this global kerfuffle means that the data privacy landscape has shifted far beyond simply “being ready for GDPR.” This is a watershed that is likely going to lead to some major lawsuits. Already Facebook’s stock is suffering, especially after media outlets revealed step by step how Cambridge Analytica managed to scrape this much user data without being stopped.

Everyone needs to take time today to review organizational data collection policies. By the end of this week, you should know exactly what information is captured at each stage of your customer journey. Then you will know where, when and how you are vulnerable — and whether you are doing right by your customers. Here’s why.
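To make that review concrete, here is a minimal sketch of what such an inventory might look like in code. The journey stages, field names and consent labels are hypothetical placeholders chosen for illustration, not a prescribed schema.

# Illustrative inventory: for each stage of the customer journey, record what
# is captured, why, and on what consent basis. All values here are made up.
DATA_INVENTORY = {
    "inspiration": {"fields": ["search_terms"], "purpose": "personalization", "consent": "cookie opt-in"},
    "booking": {"fields": ["name", "email", "payment_token"], "purpose": "fulfilment", "consent": "contract"},
    "post-stay": {"fields": ["review_text"], "purpose": "product improvement", "consent": "explicit opt-in"},
}

def stages_missing_consent(inventory):
    """Return the stages that capture data without a documented consent basis."""
    return [stage for stage, entry in inventory.items() if not entry.get("consent")]

print(stages_missing_consent(DATA_INVENTORY))  # [] once every stage has a documented basis

An exercise like this makes the gaps obvious: any stage that captures data with no recorded purpose or consent basis is exactly where you are vulnerable.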

Do the right thing

First off, you must be extremely careful about how and where you are scraping data about users. The easiest test is to consider how you would feel if a company were doing the same thing to you or your family. The ‘sketch test’ is a simple and straightforward way to do the right thing.

Second, when you screw up, say you’re sorry. Even after being informed of the violation back in 2016, Facebook still hasn’t notified any of the 30 to 50 million users whose scraped data Cambridge Analytica allegedly received. How is this even possible? In what world is it OK for a company to knowingly refuse to inform people that their privacy rules — as defined by the platform itself — have been violated?

Facebook’s tepid response is couched in language that contradicts the actual events. Paul Grewal, deputy general counsel at Facebook, seems oblivious to just how tone-deaf the company has become to a reality that we all face — a reality of excessive intrusions into our psyches that manipulate the basest instincts in humanity. He said:

“Protecting people’s information is at the heart of everything we do. No systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.”

So if protecting user information is at the heart of everything Facebook does, then Facebook itself has been violated at its core. The heart has been pierced — and Facebook’s response is not strong enough to suggest that the company actually understands how major this violation is. If my heart were pierced, you’d better believe I would be more adamant about pursuing my attacker!

And also — how in the world can the company claim that no “sensitive pieces of information” were stolen? Wow. Just in case you missed it, here are the attributes that Cambridge Analytica was provided by researcher Aleksandr Kogan, who told Facebook all of his data collection was being done for non-commercial research:

Email copy received by the New York Times.

I may be a sensitive Millennial, but I’m pretty sure the majority will agree that this information is most definitely ‘sensitive.’ How often do you go around broadcasting your IQ? Or how satisfied you are with life? Or how neurotic you are?

It’s not demographic info like age and gender that’s scary — it’s the deep psychographic profiling that this company managed to compile from scraping information that was not theirs to scrape in the first place. They did not own this information, they did not have rights to this information, and what they did definitely violated Facebook’s policies — and might even have been illegal in some jurisdictions.

When social media is under attack, maybe don’t take to social media to whine?

One Facebook executive — none other than the company’s chief security officer, Alex Stamos — said this in a series of now-deleted tweets: “The recent Cambridge Analytica stories by the NY Times and The Guardian are important and powerful, but it is incorrect to call this a ‘breach’ under any reasonable definition of the term.”

Most users do not care whether this fits a “reasonable” definition of a data breach, with an intruder stealing information. It’s a breach of trust, regardless of whether a system was breached in the traditional sense of the word. It’s a breach of trust that Facebook allowed third-party apps unfettered access to profiles whose owners had no chance to opt in. It’s a breach of trust that Facebook chose to let these bad actors remain on its platform — Facebook did nothing to protect its users, and then acted shocked that users are bothered.

Just look at another tweet from a Facebook exec, showing the shared disease of a tone-deaf tech response that, at its best, is not winning the company any hearts or minds. At its worst, it’s deeply alienating and shockingly out of touch with how “normal” people feel about the global backlash against this unauthorized breach of the platform’s privacy protections.

The lesson here is two-fold: first, do the right thing. And then, if that fails, don’t be tone-deaf, whether you thought you were doing the right thing and weren’t, or clearly did the wrong thing and are trying to make it right. This is going down as a case study in ‘how not to respond to a data crisis.’

Vet your partners

Facebook should have done a better job of understanding how its partners were using its data, and whether they were abiding by the network’s data collection rules. Facebook did not vet Aleksandr Kogan, the researcher purported to have provided the vast majority of the unauthorized user data. The company had a program for academics, but it did not back that program with a functional vetting process.

If you are going to bring a partner into your core business — in this case, Facebook’s core business is monetizing its user data — then you need to vet them clearly, carefully, and consistently. A fundamental misunderstanding of what’s important to Facebook — the integrity of its user data — has led the company to a massive crisis.

“We wanted as much as we could get,” said Cambridge Analytica co-founder Christopher Wylie. “Where it came from, who said we could have it — we weren’t really asking.”

If you don’t vet your partners, you are asking for trouble. Due diligence up front is far more effective than dealing with the fallout later. It might seem annoying, especially if you are a tech company whose motto once was “move fast and break things.” But sometimes the only thing you’ll break is your own business. Protect yourself from bad actors by vetting your vendors rigorously!

And this extends to any partners your brand might engage with. Check out Channel 4’s investigation into Cambridge Analytica, which used a fake client to get the company’s CEO to share some ethically dubious strategies with an undercover journalist. Whether it’s a potential client or a partnership, due diligence is de rigueur. The more intertwined the world becomes, the more liability brands open themselves up to with each partner relationship. Be wary, be smart, and do the work up front to stay in the clear.

Here, let me help: watch this video of Cambridge Analytica’s Chief Data Officer at ITB Berlin this year, discussing “the future of personalized advertising.” When you hear a company make claims like that, it’s time to ask: where did you get your data — and show me proof that you had permission to use it!

When in doubt, ask

This applies to both vendors and users. If you don’t know where a vendor gets its data, ask. If you don’t understand the answer, ask again. If the vendor is unable to provide a clear, easily understandable answer, then look for another vendor. And yes, this could include Facebook. With its new Trip Consideration feature, travel brands must be crystal-clear that there are no data liabilities baked into any data-driven personalization ad product.

You are not innocent just because you don’t know where the data comes from! Remember that willful ignorance is not a defense in many cases. And even if you don’t run afoul of the government, user trust is a tough thing to regain.

So this means you should also ask your users for permission. Wherever possible, ask your users to opt in and offer customizable settings for communications, personalization and data sharing. By erring on the side of caution (or simply caring more about your user than you currently do), you will not only protect yourself but also make a better product.
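As a purely illustrative example, here is a minimal sketch of what opt-in, default-off preferences could look like in code. The setting names and structure are assumptions made for this example, not a reference to any particular platform’s API.

# Minimal sketch: every data-use preference starts off and flips on only when
# the user explicitly opts in. Field names are hypothetical examples.
from dataclasses import dataclass

@dataclass
class ConsentPreferences:
    marketing_emails: bool = False      # communications
    personalized_offers: bool = False   # personalization
    share_with_partners: bool = False   # data sharing

    def opt_in(self, setting: str) -> None:
        """Record an explicit, user-initiated opt-in for one setting."""
        if not hasattr(self, setting):
            raise ValueError(f"Unknown setting: {setting}")
        setattr(self, setting, True)

# A new user starts with nothing enabled; only their own action changes that.
prefs = ConsentPreferences()
prefs.opt_in("personalized_offers")

The design choice is the point: nothing is shared or personalized until the user says so, which is both a safer legal posture and a clearer promise to your customer.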

Put the user at the heart of everything you do, and you can’t go wrong. Stop being creepy, start being thoughtful, and let’s move forward to a world where users don’t have to worry about being psychographically profiled down to their own “neuroticism.”

If brands don’t answer this wake-up call, a user revolt is coming. And that data spigot is going to be regulated so much that the promising future of data-driven personalization will fade away.

The stakes are high — are you prepared to do what needs to be done?

Opinions and views expressed here do not necessarily reflect those of tnooz, its writers, or its partners.

Photo by Bernard Hermant on Unsplash