
Can Asimov’s three laws of robotics really be incorporated into robots and AI?

It all depends on what you consider the three laws of robotics to actually mean. What counts as injuring a human? And what counts as inaction? The laws are not exact at all, yet a machine needs to determine precisely how to assess a situation and how to respond. Software can’t handle fuzzy rules; it needs concrete criteria to decide anything. So it takes interpretation to translate the laws of Asimov, a sci-fi writer, into something usable.
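To see why the fuzziness matters, here is a purely hypothetical Python sketch of what a literal encoding of the first law would require. Every name in it (Action, estimated_harm, the 0.1 threshold) is an assumption invented for illustration; each one hides an open judgement that the law leaves to whoever writes the code, which is exactly the point.

```python
from dataclasses import dataclass

# Hypothetical sketch of a literal "First Law" check.
# None of these estimators or thresholds exist in practice; each stands in
# for a fuzzy judgement that the law itself does not define.

@dataclass
class Action:
    description: str
    # Assumed: some upstream model can score expected harm to nearby humans
    # on a 0..1 scale. Defining "harm" is the unsolved part.
    estimated_harm: float

HARM_THRESHOLD = 0.1  # Arbitrary: where exactly does "injure" begin?

def violates_first_law(action: Action, doing_nothing: Action) -> bool:
    """Return True if the action, or inaction, 'injures' a human.

    The law also forbids allowing harm through inaction, so the robot
    must compare acting against not acting -- another fuzzy judgement.
    """
    harms_by_acting = action.estimated_harm > HARM_THRESHOLD
    harms_by_inaction = doing_nothing.estimated_harm > HARM_THRESHOLD
    return harms_by_acting or harms_by_inaction

if __name__ == "__main__":
    push = Action("push a person out of the path of a car", estimated_harm=0.3)
    wait = Action("do nothing", estimated_harm=0.9)
    # Both options exceed the threshold: the rule gives no answer here,
    # which is the interpretation problem in miniature.
    print(violates_first_law(push, wait))
```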

Hence, in principle, the three laws could be implemented. But it is a bit like encryption. Some suggest a kill switch, maybe in hardware, maybe in software. Yet if we build a backdoor into every protocol in the name of safety, it compromises that safety itself: the loophole becomes available to anyone with bad intentions. It is like the proposed backdoors in encryption protocols, as if the “bad guys” would use the approved standard protocols to encrypt; they know those are not bulletproof. The same goes for robots. Whoever programs the robot decides about the good or bad behaviour of the machine.

Cylons? We call ’em toasters. Hence, the three laws of robotics are fiction, I’m afraid.

Interesting read as well: robots and careers.

Published in POST