Can Asimov’s Three Laws of Robotics really be incorporated into robots and AI?

It all depends on what you consider the laws to actually mean. What counts as injuring a human? What counts as inaction? A machine needs to determine exactly how to assess a situation and how to respond, so the laws have to be translated into concrete rules, and that translation is interpretation. In principle, the three laws could be implemented. But it is a bit like encryption: if we build a backdoor into every protocol for safety, the loophole will also be used by people with bad intentions. And the people who want to hide something using encryption would certainly not use the official protocols or standards, because they know those are not bulletproof. The same holds for robots. Whoever programs the robot is the one who decides about the good or bad behaviour of a toaster.
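The interpretation problem above can be sketched in code. This is a toy illustration, not anything from Asimov or a real robotics system: the function names and the `harm` predicates are invented for the example. The point is that the First Law itself is trivial to state; all the actual behaviour lives in the programmer-chosen definition of "injure".

```python
# Toy sketch: any implementation of "do not injure a human" must reduce
# the law to a concrete, programmer-chosen predicate. Two programmers,
# two different robots, same "law".

def strict_harm(action):
    # Programmer A: any physical contact with a human counts as injury.
    return action["contact"]

def lenient_harm(action):
    # Programmer B: only contact above a force threshold counts.
    return action["contact"] and action["force_newtons"] > 50

def first_law_allows(action, harm_estimate):
    # The "law" is one line; the interpretation is the whole program.
    return not harm_estimate(action)

nudge = {"contact": True, "force_newtons": 5}

print(first_law_allows(nudge, strict_harm))   # robot A refuses the action
print(first_law_allows(nudge, lenient_harm))  # robot B carries it out
```

Both robots "obey the First Law", yet they behave differently on the same input, which is exactly why whoever writes `harm_estimate` decides what the law means.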