If we were to ask a random group of people to name, in their opinion, the three most famous films, actors, sports stars or fashion designers, we would likely receive a multitude of different names and titles in each of those four categories. Without the time or resources to undertake such a survey, we turned to Google and ChatGPT (more on this later) to answer a question we wanted to pose: who are the three most famous writers of science fiction? The names most frequently returned were Arthur C. Clarke, Robert A. Heinlein and Isaac Asimov. They were not the first science fiction writers, and so did not define the genre; great writers like H. G. Wells and Jules Verne had already seen the potential for science to drive a story. What the ‘big three’ did was expand its scope - particularly in the style called "hard science fiction", which stays as faithful as possible to the laws of the science it writes about.
The one consistent comment made about Clarke, Heinlein and Asimov is that their novels have been uncannily accurate in predicting the future. For example, images of the International Space Station are now commonplace, and followers of this genre cannot help but recall Clarke's 2001: A Space Odyssey when viewing them - a novel published in 1968, drawing on 'The Sentinel', a short story Clarke had published back in 1951. We can go further back still, to 1938, for Heinlein's first novel, For Us, The Living (unpublished during his lifetime), in which he describes a nationwide information network from which the hero of the story is able to instantly access a newspaper article written during the previous century, ‘from the comfort of a friend’s home’. So, in effect, was the World Wide Web foretold.
It is, however, Asimov who is deliberately our point of reference for this note – in particular, the Three Laws of Robotics he devised to guard against potentially dangerous artificial intelligence. They first appeared in his 1942 short story Runaround.
Lessons from the past - the Three Laws of Robotics
When he devised his laws, Asimov was thinking about androids. He envisioned a world where these human-like robots, who could think and reason as we do, would act like servants, and there would be a need for a set of programming rules – or ethical guidelines – to prevent them from causing harm.
The laws were:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders from human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
A fourth law – the ‘Zeroth Law’ – emerged in a book Asimov wrote much later in life, the 1985 novel Robots and Empire. It states: ‘A robot may not harm humanity, or, by inaction, allow humanity to come to harm.’ So respected and farsighted is this moral code for keeping our machines in check that it served as the template when the South Korean government first proposed a Robot Ethics Charter in 2007. It resonates even more today as we turn to the topic du jour in the AI space: ChatGPT.
The OpenAI Charter
According to its website, OpenAI is an AI research and deployment company with a ‘mission to ensure that artificial general intelligence benefits all of humanity’. Late last year, OpenAI released ChatGPT, which attracted one million users in 5 days. By January 2023, the platform had reached over 100 million users, making it the fastest-growing consumer application in history. We cannot add any more to the extensive coverage this has received around the world. However – and in the context of Asimov and his Three Laws – it is noteworthy that OpenAI has published a charter which guides the business in acting in the best interests of humanity throughout the ongoing development of artificial general intelligence (AGI). It centres on broadly distributed benefits: a commitment to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Long-term safety also features, with the business committing to do the research required to make AGI safe, and to drive the broad adoption of such research across the AI community. The concerns expressed by Asimov are clearly still being heard – and quite loudly too.
So what comes next? - from AI to the metaverse
Technology holds extraordinary potential for the development of humankind, yet regulation is not keeping up with the pace of change. Unless the world gets a handle on this, new or rapidly growing technologies - especially where they cross borders - could escape control, with unknown and potentially malign consequences. At the same time, technologies like synthetic biology or quantum computing offer the chance of tackling global problems like climate change, disease or pollution. But does “techno-optimism” present unforeseen risks, particularly when it is easy to get swept away with the hype and societies blindly reassure themselves that tech has all the answers?

We know online life is becoming steadily more immersive, and our virtual lives are increasingly taking time from our offline lives. As more companies and people move into the metaverse, its regulation (or lack thereof) is gaining attention; concerns range from sexual abuse of avatars, to hate speech, to financial crime. Massive advances in computing power and super-smart algorithms are shaping this area, and fast. The health and prosperity of societies will be influenced by how people live their digital lives, so the choices we make now about how these parameters are designed are crucial. The critical questions for us are: can we govern at the speed with which technologies are changing? And who should regulate what?
Source: UNDP Signals Spotlight 2023