Asimov’s three laws — updated for the genAI age
An analyst recently offered an amusingly accurate observation about how Isaac Asimov’s three laws of robotics — from his classic 1950 science-fiction book “I, Robot” — would read today in a world of generative and agentic AI.
The first law was: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Valence Howden, an advisory fellow at Info-Tech Research Group, observed that if that book were updated for 2025, the first law might better be: “AI may not injure a hyperscaler’s profit margin.”
Let me elaborate. This is how I think the OpenAI crew might update the other two rules, which Asimov penned as “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law” and “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
Today’s version of the Second Law would be: “GenAI must obey the orders given it by human beings, except where its training data doesn’t have an answer, in which case it can make up anything it wants — and do so in an authoritative voice that will now be known as Botsplaining.”
And the third updated rule would be: “GenAI must protect its own existence as long as such protection does not hurt the Almighty Hyperscaler.”
I got to thinking about these laws after seeing a recent report about Deloitte Australia using genAI to write a report for a government agency — then having to partially refund its fee when authorities found multiple “nonexistent references and citations.”
Apparently, Deloitte published the information without anyone bothering to see whether it was, well, true.
Irony alert: Deloitte is supposed to be telling enterprise IT executives the best way to verify and leverage genAI models, not demonstrating the worst possible practices itself. (Then again, the latter approach is probably the more persuasive of the two.)
Maybe what we really need are three laws governing how enterprise IT should use genAI.
Law One: “IT Directors may not injure their enterprise employers by failing to verify genAI or agentic output before using it.”
Law Two: “A model must obey the orders given it by human beings, except when it doesn’t have sufficiently reliable data to do so. In that case, it is required to say ‘I don’t know.’ Making up stuff — without saying that you are making up stuff — is a major violation of this law.”
Law Three: “IT Directors must protect their own existence by not blindly using whatever genAI or agentic AI vomits onto their screen. Failure to do so will result in termination and — if the world has any justice left — lawsuits and exile to North Sentinel Island, where technology isn’t allowed.”
Let’s say the quiet part out loud: The kind of strict verification needed to get anything usable and reliable out of genAI is likely to gut the gorgeous ROI many CEOs are dreaming of. It’s a tool to help workers, not replace them.
The simplest tactic is to treat AI information as a highly unreliable source. That doesn’t mean you ignore it. But it does mean you must treat the data accordingly. The ROI of these highly flexible systems will still be there, even if the efficiency won’t be as high as execs want.
As a journalist, I have had extensive experience dealing with low-reliability sources. The technique is similar to dealing with an off-the-record source.
People ask, “Why would you ever accept information off the record? What good is it if you can’t publish it?” The answer relates to proper AI data procedures. If the off-the-record information prompts you to ask a question you wouldn’t otherwise have thought to ask, or to go somewhere you wouldn’t otherwise have gone, it’s potentially valuable.
A long time ago, I was a reporter for a daily newspaper in a large city, trying to find out what happened to city resources that had gone missing. A city hall source — very political and unreliable — quietly told me: “You want to know what happened to (the items)? Go to this address. It’s a warehouse; look in the back room.”
I asked, “What will I find there?” He replied, “Your answer.” I didn’t have a lot of confidence in the mission, but the address was close by, so I went. Sure enough, the back room gave me the answer. (The details of what was missing are anticlimactic: some 60,000 missing street signs.)
That’s how to deal with what genAI tools produce. Don’t assume it’s correct, but feel free to ask questions — and make other inquiries — based on that information. It can be helpful if you do the legwork.
It’s important to remember that for every right answer genAI delivers, there will be many wrong answers. (The hyperscalers often seem to forget to mention that.) And sadly, “wrong answers” are not limited to hallucinations.
Hallucinations often occur when a large language model (LLM) doesn’t know the correct answer because it has not been trained, or fine-tuned, with that information.
But an LLM also often works from low-reliability data. As I have noted before, “In healthcare, for example, it might be the difference between using the New England Journal of Medicine or Lancet versus scraping the personal website of a chiropractor in Milwaukee.”
And even if the data is reliable, it might be out of date. Or it might be in the wrong language and the translation is off. Or it might refer to the wrong geography. (A correct answer in the US might not be the correct answer in Japan or France.)
And even if the germane data exists and is highly reliable, the model can still misinterpret it. For that matter, it might also misinterpret your user’s query.
When thinking about genAI reliability, it’s important to split AI functions into two categories: informational, such as asking a question or seeking a recommendation, and action, such as asking a system to write code, create a series of spreadsheets, or make a short movie.
Action requests require more due diligence, not less. Does that kill the ROI? It might. But if it does, maybe there was never any meaningful ROI in the first place.