Why we should hope that corporate claims about the ethics of facial recognition are pure marketing

Dorothea Baur
5 min read · Feb 15, 2020

On January 15, 2020, Meredith Whittaker, co-founder of the AI Now Institute, gave a testimony entitled “Facial Recognition Technology (Part III): Ensuring Commercial Transparency & Accuracy” before the US House of Representatives Committee on Oversight and Reform. The testimony provides excellent insight into how facial recognition technology can exacerbate inequality and discrimination, how it can serve as a starting point for further worrying applications such as ‘emotion recognition’, how technical fixes are insufficient to solve the problems with facial recognition, and how, in light of all of this, it is time to “halt the use of facial recognition in sensitive social and political contexts, by both government and private actors”. Because, says Whittaker,

“facial recognition poses an existential threat to democracy and liberty, and fundamentally shifts the balance of power between those using facial recognition and the populations on whom it’s applied”.

Upon reading this brilliant piece of work, one passage struck a chord with me and took me back to my time as a lecturer in business ethics. At one point (p. 9), Whittaker refers to the fact that some companies, including Amazon, claim their facial recognition systems “did not exhibit the racial, gender, and other biases found in similar systems” without ever substantiating this claim or subjecting their systems to external audits, e.g. by NIST (the National Institute of Standards and Technology).

Whittaker argues that “(i)n such cases, truth-in-advertising laws applied to AI companies would be helpful, holding companies liable for misrepresentations made in marketing, and giving the Federal Trade Commission or other designated agencies leverage for enforcement”. This sounds very plausible and desirable. But there’s a catch: are these statements about ‘bias-free’ facial recognition tech really ‘only’ commercial speech and thus subject to advertising laws? Don’t they address fundamental public concerns, which by no means can be compared with claims about the brightness achieved by a washing detergent?

I remembered a case from one of my business ethics textbooks that took me a while to understand because it operates at the intersection of ethics, business, and law: the so-called Kasky v. Nike case. In the 1990s, Nike came under massive pressure when it became public that much of its footwear and apparel was produced under highly exploitative working conditions in so-called sweatshops in South East Asia. The pressure became so strong that in 1998, then-Chairman and CEO Phil Knight admitted that “Nike products have become synonymous with slave wages, forced overtime, and arbitrary abuse”.

Unsurprisingly, Nike rebutted the allegations in public statements. This led Californian consumer activist Marc Kasky to sue Nike. Kasky cited evidence that Nike’s rebuttal was untrue and accused Nike of violating truth-in-advertising laws with its claims. He argued that “Nike’s statements were no different from ordinary commercial advertising, warranting only the lower degree of protection appropriate to commercial speech”.

Nike, unsurprisingly, did not accept these accusations. It claimed that its “responses to accusations of sweatshop labor” were not a matter of commercial advertising but “addressed debates about globalization that were matters of ‘public concern’”. Therefore, they “warranted the highest level of First Amendment protection”, i.e. the protection granted to the free speech of citizens, be they human or ‘corporate citizens’.

The trial court and the California Court of Appeal dismissed Kasky’s claim. Only when Kasky appealed to the California Supreme Court were the lower courts’ rulings reversed. The court held that “Nike’s statements were commercial speech” and thus warranted less constitutional protection than non-commercial speech. Finally, Nike appealed to the US Supreme Court. Yet, instead of issuing the expected “landmark ruling on the free speech rights of corporations,” the Supreme Court “dismissed certiorari as improvidently granted” (sorry, that’s total legalese. I recommend you check out Wikipedia).

Now, what does this have to do with facial recognition technology and the call to hold companies liable for misrepresentations made in marketing when they make unfounded (or even false) claims about the ‘absence of bias’ in their software?

Just like Nike’s assertions about the labor conditions in its sweatshops, the claims tech companies make about ‘bias-free’ facial recognition tech could be understood as either commercial or non-commercial.

Whittaker’s testimony powerfully illustrates the vast political dimensions of facial recognition technology, i.e. the threat it poses to our civil rights. Thus, shouldn’t we see facial recognition technology companies as political actors?

There are in fact reasons to see all corporations as political actors. But in the case of facial recognition tech companies, the argument seems particularly straightforward. By selling products with a direct impact on our liberty, these companies are interfering with democracy.

Pretending they are ‘just doing business’ downplays the ramifications of their activities.

Yet, we could also say: yes, let this be commercial speech — this whole ‘AI ethics talk’ is pure marketing strategy anyway (cue ethics washing); therefore, it should be subject to advertising laws. Just as a typical fast food company cannot claim that its food is good for your health, or would be held liable for that claim if it did, a tech company must not deceive you into thinking you are buying and using (or being subject to!) a ‘fair’ facial recognition technology if that is not true. So, yes, acknowledging these statements as purely commercial claims is not just a means to an end to protect us; it also sees them for what they are, namely pure marketing.

And this is where the First Amendment comes into play again: remember, the First Amendment grants corporations the right to free speech, but the strongest protection applies ‘only’ as long as that speech is non-commercial (please read the caveat at the end). As soon as it is commercial, it is subject to truth-in-advertising laws.

So, paradoxically, from the point of view of citizens concerned about the political dimensions of facial recognition, we want the claims about the vast ethical and political dimensions of this technology to count as commercial, because this means we can penalize companies for misinformation. If we acknowledge the non-commercial, or rather the effectively ‘far-beyond-commercial’, dimensions of facial recognition technology, we in fact severely restrict our ability to demand accountability for claims about its ‘ethical qualities’.

What do we learn from this? While many of the ethical challenges posed by AI seem unprecedented, a substantial part of them is history repeating itself. Even if Whittaker’s call to apply truth-in-advertising laws to AI products and services is heard, we must face the possibility that companies will claim they are only exercising their right to non-commercial speech and thus deserve strong constitutional protection. What makes the ethical challenges posed by AI so distinctive is their scale and their speed. This means it’s high time to get going — for politicians, for courts, but first of all for us as citizens, by insisting on opening those corporate black boxes. Because transparency is a precondition for accountability.

Caveat: I am not a lawyer. I am aware that I am really going out on a limb here. But I used to tell my students the most important thing was to ‘dare to be wise’. So, take this as a modest — or should I say reckless? — attempt at practicing what I preach.

