Newly developed generative artificial intelligence (AI) tools, which can produce plausible human language or computer code in response to operator prompts, have provoked discussion of the risks they pose. Many people worry that generative AI will be used to produce social engineering content or exploit code for use in attacks. These concerns have led to calls to regulate generative AI to ensure it is used ethically.

From Frankenstein to The Terminator, the possibility that technological creations will turn on humanity has been a science fiction staple. In contrast, the writer Isaac Asimov considered how robots would function in practice, and in the early 1940s he formulated the Three Laws of Robotics, a set of ethical rules that robots should obey:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Many science fiction stories revolve around the inconsistencies and unexpected consequences that arise when AI interprets and applies these rules. Nevertheless, the laws provide a useful yardstick against which the current generation of generative AI tools can be measured. It would be unethical, and likely illegal, to test whether generative AI systems can be instructed to damage themselves. However, networked systems are subjected to a constant barrage of attempts to exploit or subvert them.
Full report: Does Generative AI Comply With Asimov’s 3 Laws of Robotics?