It’s a perennial topic, Artificial Intelligence (AI). Concerns about where science and technology are leading society are never-ending. The earliest debates leading up to the Age of Science were mostly about how to do science and the role of experimentation and objectivity. As early as the thirteenth century, Roger Bacon was making the case for the experimental method, and by the turn of the seventeenth century, Galileo Galilei, following the discoveries of Nicolaus Copernicus 100 years earlier, was suffering the consequences of practising it in a world still embedded in medievalism. Four hundred years on, we find ourselves not only displaced from the centre of the universe thanks to Copernicus and Galileo, but entertaining visions of the human species becoming entirely superfluous to the future of the world. Robots, machines, cybernauts, whatever, will simply work out how to do everything with a level of consciousness that far exceeds human capacities. They will reproduce themselves, redesign themselves so that every generation is superior to the one before it, and recalibrate the world so that it survives the changes and challenges confronting it.
This prognosis would be considered science fiction were it not for the views shared by illustrious folk such as Prof Stephen Hawking and Bill Gates – in interesting contrast to Eric Horvitz, Director of the Microsoft Research lab at Redmond. Given that what can be imagined can be achieved, or, as Marx once noted, that what can be imagined implies the material conditions for its realization already exist, the question for many thinkers is not so much whether it can happen as what the alternatives are. One intriguing question is whether, even if machine consciousness, however that is defined, proves a likely development sometime hence, individual machines will develop self-consciousness. In 2001: A Space Odyssey, HAL, the spacecraft’s computer, apparently goes haywire and decides to protect himself (itself?) from being shut down after he lip-reads the plan being concocted by the two crew members who are awake. But HAL takes over not because he is superior, which he is, but because he is faulty. Yet the line between being faulty and acting for his own protection is a tricky one. A madman can be considered faulty, but will act in his own self-delusional interests. Self-consciousness is a double-edged sword: on the one hand it makes everyone creative in their own way, while on the other it leads to conflicts of interest. Would machines go to war with each other? That would seem to contradict the idea of machines using their superior intelligence to protect their earthly environment. So does AI lead to a global ant colony, each individual working for the greater good of all, or to Star Wars?
The vision of the end of the human species through AI rather than through thermonuclear war or being wiped out by pathogens has its pleasing side. The Earth, or ‘Gaia’ in the words of environmental scientist James Lovelock, could survive, which implies a sustainable balance of nature. In Lovelock’s view, conceptualizing Gaia in these terms – as opposed to seeing the Earth as a collection of individual species each vying for its own survival – readily suggests the feasibility, not necessarily the probability, of Gaia sacrificing the human species for the good of the whole. It’s a conclusion that AI machines might easily come to. HAL’s successors need not be faulty.
Working backwards from these grand propositions allows us to think about how the resources of the planet, which include the productivity that ICTs, as part of science and technology, create, can best be utilized, allocated, distributed for production and reproduced in a way that is sustainable for Gaia. The human species has been singularly unproductive in finding ways to address these crucial issues. The optimistic view is that it takes a crisis before humanity wakes up to dangers such as climate change, water shortages and pandemics, and that technology can be a big part of the answer. Wearables, for example, can help consumers monitor their health better, and if that is an insufficient incentive to eat more healthily and to walk or cycle rather than drive, then the insurance company can use the data to offer lower premiums to those who do.
The only problem with that view is that we have heard it all before, yet humanity seemingly steers the world inexorably towards the uncertainties of chaos. The real danger now is that a lack of belief in the solutions of science and in rational thought processes is feeding reaction, most dangerously among the young. Signing up to extremist movements that use the most advanced methods of social media to espouse thinking that is utterly medieval in substance would have Roger Bacon reeling. A decade ago, great claims were being made for the potential of e-government to be more inclusive of the grass roots of society. Perhaps the discourse around ICTs should now be less about creating things for consumers and more about turning consumers back into socially participating citizens.