Daniel Dennett’s last interview: ‘AI could signal the end of human civilisation’
Do we still need philosophers? Daniel Dennett, who died last week, believed strongly that we do. ‘Scientists have a tendency to get down in the trenches and commit themselves to a particular school of thought,’ he told me from his home in Maine, not long before he died. ‘They’re caught in the trenches so a bird’s eye view can be very useful to them. Philosophers are good at bird’s eye views.’ Scientists of all sorts valued their conversations with him, ‘I think because I could ask them questions that they realised they should have had answers to,’ he said.
I grew up with Dennett. First, as a teenager, watching YouTube videos as he raged joyfully against the irrationalities of religion. Then, as an undergraduate, I came to his ideas about consciousness. He was perhaps best known as one of the four horsemen of the New Atheist movement – alongside Richard Dawkins, Christopher Hitchens and Sam Harris – and was there at the very beginning of cognitive science in the 1960s. ‘When I look back at some of the armchair philosophy about minds and psychology, done by philosophers who don’t know any science, sometimes it is just ludicrously wrong – comically mistaken – because they trust their intuition and their intuitions are just wrong.’
Dennett’s central mission was to demystify consciousness and bring it within the realm of science. ‘It seems to many people that they have this show going on in their heads,’ he told me. ‘You have to get rid of that idea because it’s just wrong. That is to say, you have got to show how the inner witness is composed out of things that are not themselves conscious.’
If there isn’t an inner me experiencing my thoughts, feelings and the things I see and hear, what is going on? ‘What’s happening in the brain is there are many competing streams of content running in competition and they’re fighting for influence. The one that temporarily wins is king of the mountain, that’s what we can remember, what we can talk about, what we can report and what plays a dominant role in guiding our behaviour – those are the contents of consciousness.’
This idea might seem esoteric, but it influenced how many in the world of AI conceive of thinking machines. Those acquainted with the workings of large language models, the technology behind ChatGPT and Google’s Gemini, will recognise a similarity in Dennett’s description of consciousness and the architecture of generative AI: parallel processing streams producing outputs that compete for salience.
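As a loose illustration only (this is neither Dennett’s formal model nor the actual architecture of any large language model), the competition described above can be caricatured as a winner-take-all selection: several content streams run in parallel, each carrying a salience weight, and the momentarily strongest one becomes the ‘reportable’ content. The stream names and weights below are invented for the sketch.

```python
def dominant_content(streams, salience):
    """Toy winner-take-all: pair each candidate content stream with its
    salience weight and return the stream that currently dominates,
    i.e. the one a subject could report on ('king of the mountain')."""
    winner, _ = max(zip(streams, salience), key=lambda pair: pair[1])
    return winner

# Hypothetical competing contents at one moment:
streams = ["plan dinner", "sound of rain", "itch on arm"]
salience = [0.2, 0.7, 0.1]
print(dominant_content(streams, salience))  # → sound of rain
```

The point of the caricature is only structural: nothing in the function is itself ‘conscious’, yet the selection process yields a single dominant content, echoing Dennett’s claim that the inner witness is composed of non-conscious parts.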
So why do we find it so intuitive to think of ourselves as an inner being, an occupant in our bodies? ‘It’s a sort of metaphor. I like to say it’s a user illusion,’ he told me. Imagining an inner person allows us to communicate our motivations to other human beings and in turn communicate them to ourselves. ‘To put it bluntly and horrifyingly, if a normal human baby could be raised on a desert island or raised in a laboratory without interacting with any people, without learning language, that human being would have consciousness hugely different from ours, in the way that chimpanzee consciousness is hugely different from ours. It is to our consciousness roughly what birdsong is to language.’
While language allows us to articulate our inner lives, it also divides cultures, right down to the way we process information. Dennett explained it using the example of our perception of colour: ‘Different cultures have different ways of dividing up colour,’ he said. ‘There are a lot of experiments that show that what colours you can distinguish depends a lot on what culture you grew up in.’ There is evidence, too, that westerners process people’s faces differently to non-westerners. The very movement patterns of our eyeballs are dictated by culture.
Recognising these cultural differences didn’t lead Dennett into moral relativism. ‘I am relieved not to have to confront some of the virtue-signalling and some of the doctrinaire attitudes that are now running rampant on college campuses, I retired just in time,’ he said. ‘I think that some of the multiculturalism, some of the ardent defences of multiculturalism, are deeply misguided and regressive and I think postmodernism has actually harmed people in many nations. Take the most obvious cases: the treatment of women in the Islamic world; the horrific reactions to homosexuality in many parts of the world that aren’t western. I think that there are clear reasons for preferring different cultural practices over others.’
I asked him about the risks of AI. Many in the field talk of an existential threat from machines that can think as well as, if not better than, humans. His response was characteristically practical. ‘Part of the problem here is that some thinkers have looked ahead and seen the possibility of strong artificial general intelligence becoming conscious, becoming alive and taking over the world and enslaving us. Yawn!’
So what was worrying him? ‘We are spending more of our lives in the digital environment. Evolution has not prepared us well for doing that because it’s now altogether too easy to make counterfeit people, deep fakes. I mean here we are, you and I, we are talking to each other over Zoom, I can see you and you can see me but, as you know, it’s not just a theoretical possibility, it’s a practical possibility that you’re not talking to me at all, you’re talking to a fake Dennett, a deep fake, and you’re having the wool pulled over your eyes. If we don’t create, endorse and establish some new rules and laws about how to think about this, we’re going to lose the capacity for human trust and that could be the end of civilisation.’