‘We needed to do better’: Google’s vice-president of trust and safety speaks out on banning election ads in Canada
|Toronto Star 14 Oct 2019 at 06:23|
As U.S. President Barack Obama’s deputy chief of staff, Kristie Canegallo was charged with maintaining public trust in government efforts such as health care reform and education.
Since 2018, she has been tasked with a similar responsibility at Google. As the company’s vice-president of trust and safety, it’s her job to combat disinformation across its products and services.
Part of that mandate included a decision in March to ban political ads on Google platforms in Canada ahead of the federal election on Oct. 21. That move followed the government’s passing of the Elections Modernization Act, which requires advertising companies to maintain registries of all clients.
In Toronto recently, Canegallo discussed that decision with the Star and how the fight against disinformation is affecting business.
The key issue with the Act was having to maintain a registry of advertisers. Why did Google have a problem with that?
We introduced transparency reports in the U.S. ahead of the 2018 midterms, and ahead of elections in India and the EU last year. As the law came into form here, we took a hard look at the requirements and wanted to ensure — if we were going to stay active in the space — that we could comply with the law and provide the transparency on the timeline that was required. We didn’t think we could comply fully, which is why we exited.
Canada is a smaller country, which likely means less ad revenue for Google. How much did that play into the decision?
The conversation was really about the transparency requirements and the operational difficulty. Revenue was not a consideration.
If this was something the U.S. wanted, would you have taken a different approach?
It’s not possible for me to say … the relatively quick time to implement was part of the challenge for us.
Is Google continuing to work on this? Might you take a different approach in the next election?
We are always looking to increase transparency tools across the board. It depends on when the next election would be, and whether the requirements of the current law would be the ones folks would want to implement, or whether there’d be adjustments. It’s our goal to provide transparency in the political advertising space where possible.
Google rejects a lot of ads as “bad.” How do you do that?
We have our policies, which we set so advertisers understand what the expectations are. Then we use a combination of humans and technology to enforce those policies. We take down 6 million bad ads a day, so it wouldn’t be possible for a human to review every individual ad.
For several of our policies, where it’s a relatively black-and-white call, we develop technology classifiers that can catch bad ads, and then we have edge cases that go to human reviewers.
When we have new ad policies, we calibrate by having individual humans train that classifier. We use a combination of humans and technology to classify bad ads at scale.
We’re also increasingly dedicating resources to going after bad advertisers. In 2017, we took down about 3.2 billion bad ads and about half a million bad advertisers. In 2018, we took down 2.3 billion bad ads, but we doubled the number of bad advertisers we removed. Part of what we’re trying to do is better understand the advertisers — the actors themselves — and take action against them.
How important is it to limit disinformation on a business level?
Disinformation is not new. It’s been around for as long as information has. What’s new is the speed and scale at which it can travel.
As a company whose mission is to organize the world’s information and make it universally accessible and useful, we view disinformation as directly undermining that. We view it as a significant part of our responsibility to ensure disinformation can’t be spread through our products.
Why has the public tenor toward “big tech” shifted so dramatically since your time in the Obama administration?