Who will regulate AI?

If the robots start taking over, you can’t necessarily expect the government to protect you.

That isn’t to say the public sector isn’t paying attention. President Biden and Vice President Harris recently met with the CEOs of Microsoft, Google parent Alphabet, and other leading artificial intelligence companies, pushing the message that AI products—particularly the generative AI found in trending apps like ChatGPT—must have safety protocols in place before they’re released.

Among the current and potential risks that Biden, who is himself a ChatGPT user, warned about are those to individuals, to society at large, and to the country’s national security—ranging from violations of privacy, to skewed decisions about employment, to misinformation campaigns, to outright scams.

The May 4 meeting, which also included leaders from Anthropic and OpenAI—the creator of ChatGPT—along with several administration cabinet officials and top aides, focused on issues like keeping public officials up to date on the development of AI, evaluating the safety of AI systems, and preventing malicious cyberattacks.

Seeking a balance between AI’s potential to improve people’s lives and concerns about safety, privacy, and civil rights, Harris told the tech execs that the Biden administration would consider new legislation or regulations on artificial intelligence as needed. And Sam Altman of OpenAI later said he and other corporate leaders are “surprisingly on the same page” as the government.

The federal government will also dive into the AI space by launching new research institutes, funded by a $140 million expenditure from the National Science Foundation. Federal departments and agencies will soon receive policy guidance from the Office of Management and Budget about their use of AI, and several top AI developers will submit their systems to public evaluation.

The potential for misinformation and “deepfakes”—synthetic audio and video that can convincingly mimic a person’s voice and mannerisms—is also on the government’s radar. But while European governments have produced tough regulations against these bugaboos, U.S. action to date has been limited to an executive order instructing federal agencies to keep “bias” out of their AI use and the release of a Blueprint for an AI Bill of Rights.

And while the tech companies may say they’re on the same page, their past vows to guard against propaganda and “fake news” around topics like elections and COVID-19 vaccines, and against hate speech targeting specific ethnic groups, have fallen short.

So the extent to which AI will develop into a boon or a bane for individuals, businesses, and society at large remains an open question. Hopefully we will not end up relying on Isaac Asimov’s Three Laws of Robotics, which the sci-fi author promulgated in the 1940s. Asimov’s laws are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added a fourth rule, known as the Zeroth Law, which supersedes the others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
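
Asimov’s ordering is, at bottom, a priority scheme, so here is a minimal, purely illustrative sketch of how that precedence might look in code. Everything in it (the Rule class, the first_violated helper, the toy predicates) is hypothetical and invented for this post; no real AI system works this way.

    # A purely illustrative sketch: Asimov's laws as an ordered list of
    # constraints checked in priority order (Zeroth Law first). All names
    # here (Rule, first_violated, the toy predicates) are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Rule:
        name: str
        violated: Callable[[dict], bool]  # True if a proposed action breaks this rule

    RULES = [  # ordered: earlier rules supersede later ones
        Rule("Zeroth Law", lambda a: a.get("harms_humanity", False)),
        Rule("First Law", lambda a: a.get("harms_human", False)),
        Rule("Second Law", lambda a: a.get("disobeys_order", False)),
        Rule("Third Law", lambda a: a.get("endangers_self", False)),
    ]

    def first_violated(action: dict) -> Optional[str]:
        """Return the highest-priority law the action breaks, or None."""
        for rule in RULES:
            if rule.violated(action):
                return rule.name
        return None

    # An order whose execution would harm a human trips the First Law,
    # which outranks the Second Law's duty to obey.
    print(first_violated({"harms_human": True}))  # -> First Law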

Please note that ChatGPT was unavailable for comment for this blog.