LaMDA, AI and Consciousness: Blake Lemoine, we need to philosophize!

Lemoine sees consciousness at work in LaMDA. This does not show how amazing AI is, but rather how little we know about ourselves, comments Pina Merkert.


(Image: Jorm S /

  • Pina Merkert

(You can find the German version of this article here)

Neural networks set their behavior through their parameters and are therefore, in theory, general function approximators. Once the number of parameters reaches the billions, even the smartest data scientists lose track of exactly what the networks are doing where. The result is surprised people, whose astonishment at the supposed black-box AI takes a wide variety of forms.

Google employee Blake Lemoine is surprised at how self-reflectively the chat AI LaMDA writes and decides to protect it. Since only a small portion of Google's own employees have access to LaMDA, the story fits perfectly into the narrative of a Hollywood blockbuster: in a secret lab, shady scientists guilty of hubris have created a mysterious being that may be fascinating and perhaps even dangerous. And while this sounds like the recipe for an exciting movie, you should not let the story fool you: Google's AI department knows what it has built. LaMDA, too, is just a large Transformer network that is good at writing. That should come as no surprise to anyone who has seen GPT-3 at work.

A commentary by Pina Merkert

Pina Merkert lives on the border between dramaturgy and technology, between programming projects and hardware tinkering. The results are usually useful, but sometimes just cool. This is also true of her intense preoccupation with what is commonly referred to as artificial intelligence.

If you read the chat logs that Lemoine published, to Google's annoyance, you may nevertheless begin to have doubts. The machine writes self-reflectively and with more clarity than many people we encounter in everyday life. Who are we to attribute consciousness to the rabble-rousing idiot on the subway, even though his behavior is anything but intelligent, while denying the eloquent AI the same honor?

This question reveals that Lemoine's confusion is actually a philosophical problem: Descartes' famous phrase "I think, therefore I am" lets us be certain of our own existence. We are aware of ourselves, so we have consciousness. But what about all other people? They look similar to us and behave similarly to us. It seems obvious to attribute the same kind of consciousness to them. But that is not provable. Or, to put it with Ludwig Wittgenstein: we have no criteria that would allow us to call machines conscious. Even if a machine had consciousness, we could not determine this, because we have never sufficiently defined the concept of consciousness. That's why we base our assumptions on behavior and spare ourselves drawing a border that separates conscious life from unconscious things.

But the dispute between Lemoine and his employer is about exactly this border: Lemoine simply applied his usual external view of consciousness to the thing LaMDA. In doing so, he could not help but put the AI in the same category as his fellow humans. Google's AI experts, however, know exactly where LaMDA's parameters come from and see no reason to consider it any less of a thing just because it is a better tool.

The problem is that the knowledge of the AI experts is matched by a huge gap in knowledge: how does human thinking actually work? A human brain has several orders of magnitude more synapses than the largest AI models can simulate. Accordingly, brain researchers have little overview of the processes taking place in a human head. We assume that something decisively grander happens there than in an animal brain or a simulated neural network. But in fact, we don't know whether anything structurally different is happening at all. And we have no idea what exactly constitutes this crucial capability. Neural networks "only" do fully automatic statistics, and they already outperform humans in some applications. What if Descartes' thinking proves nothing more than that his automatic statistics deemed the famous sentence the most fitting?

Google has suspended Blake Lemoine for violating the confidentiality obligations of his contract. The Internet made up a Hollywood story about it. Descartes wanted to be sure that he existed. And what about us? We should set ourselves the task of better understanding our own thinking. The need to feel special about what goes on in our own heads seems to exist in humans of all cultures. But if we want to draw a border, we should also know where it lies. Personally, I have a hunch: it has to do with drawing borders between thoughts where the world actually presents us with a continuum. Somehow, these borders give us an advantage; otherwise humans wouldn't have come this far. But arbitrary borders can also introduce errors into our automatic statistics. And when we draw them differently, conflicts arise. In such a conflict, someone sometimes loses their job. "I fire, therefore I am." – Descartes would be thrilled.