It took me a couple of days to realize what the sudden Clubhouse buzz in my Twitter feed was about. I was still recovering from the shock of discovering that the messaging service Signal, hyped as a privacy-friendly alternative to WhatsApp, had notified everyone who had my phone number in their contacts that I had joined (provided they had activated that notification option).
I had hardly come to terms with this worrying feature of Signal when Clubhouse started to trend across my social media feeds. After its launch in Germany at the beginning of this year…
Everyone concerned with ethics is probably used to cringing while reading news on AI. For me, one of the most reliable triggers for such cringing is talk of ‘democratizing AI’; and I am not the only one.
Take this: selling video analytics as “surveillance in a box” is said to democratize high-powered surveillance. Why ‘democratize’? Because buying the software is much cheaper than hiring video analysts, surveillance becomes accessible and affordable to a much larger customer base. And this is what is meant by ‘democratizing AI’: making AI accessible to those who want it. If we follow this…
Bias, discrimination, privacy violations, lack of accountability — AI entails a lot of ethical problems. Hyping AI creates additional ethical challenges on top of the existing ones. Here is how:
1. We do not need AI for everything
The AI hype epitomizes the belief that we need AI for everything, or at least that AI always makes sense. It does not question the very purpose of AI.
Yet there is a lot of what I call ‘AI for nonsense’. ‘AI for nonsense’ is the antithesis of the ‘AI for Good’ rhetoric. AI for Good strives to serve a meaningful purpose…
On January 15, 2020, Meredith Whittaker, co-founder of the AI Now Institute, gave testimony, entitled “Facial Recognition Technology (Part III): Ensuring Commercial Transparency & Accuracy”, before the US House of Representatives Committee on Oversight and Reform. The testimony provides excellent insight into how facial recognition technology can exacerbate inequality and discrimination, how it can serve as a starting point for further worrying applications such as ‘emotion recognition’, how technical fixes are insufficient to solve the problems with facial recognition, and why, in light of all this, it is time to “halt the use of facial recognition in…
Facial recognition has come under massive scrutiny, not least since its live variant came to public attention. Approaches to using it diverge sharply. While China uses the technology routinely and extensively to surveil its citizens’ everyday lives, San Francisco, notably the ‘home territory’ of the companies driving the development of this type of technology, banned it last spring.
Somewhat surprisingly to any critical observer of the debate, the London Metropolitan Police recently announced that it would use live facial recognition cameras on London streets, citing “a duty (sic!) to use new technologies to keep people…
A few weeks ago, the San Francisco-based research lab OpenAI published a paper titled “AI Safety Needs Social Scientists”.
The paper describes a new approach to aligning AI with human values, i.e. ensuring that AI systems reliably do what humans want, by treating alignment as a learning problem. AI alignment is acknowledged as a key issue for ensuring AI safety. At the paper’s core is the description of an experimental setting aimed at reasoning-oriented alignment via debate, in which humans who are assigned specific roles (e.g. …
Reading the excellent report by the Council of Europe on “Discrimination, Artificial Intelligence and Algorithmic Decision-Making”, I wondered to what degree algorithmic decision-making could serve to further exacerbate discrimination in already deeply divided societies.
To my surprise, the report catapulted me back almost 20 years, to the time when I wrote my master’s thesis on “Presidential Systems in African States”. One of the main factors I found to determine whether a presidential system was a good choice for a country was its ‘cleavages’. In political science, a cleavage is a division of a community into different groups. Typical cleavages…