GenAI — friend or foe?


Generative AI (genAI) could help people live longer and healthier lives, transform education, solve climate change, help protect endangered animals, speed up disaster response, and make work more creative, all while making daily life safer and more humane for billions worldwide. 

Or the technology could lead to massive job losses, boost cybercrime, empower rogue states, arm terrorists, enable scams, spread deepfakes and election manipulation, end democracy, and possibly lead to human extinction. 

Well, humanity? What’s it going to be?

California’s dreamin’

Last year, the California State Legislature passed a bill that would have required companies based in the state to perform expensive safety tests for large genAI models and also build in “kill switches” that could stop the technology from going rogue. 

If this kind of thing doesn’t sound like a job for state government, consider that California’s genAI companies include OpenAI, Google, Meta, Apple, Nvidia, Salesforce, Oracle, Anthropic, Anduril, Tesla, and Intel. 

The biggest genAI company outside California is Amazon; it’s based in Washington state, but has its AI division in California.

Anyway, California Gov. Gavin Newsom vetoed the bill. Instead, he asked AI experts, including Fei-Fei Li of Stanford, to recommend a policy less onerous to the industry. The resulting Joint California Policy Working Group on AI Frontier Models released a 52-page report this past week. 

The report focused on transparency, rather than testing mandates, as the solution to preventing genAI harms. The recommendations also included third-party risk assessments, whistleblower protections, and flexible rules based on real-world risk, much of which was also in the original bill.

It’s unclear whether the legislature will incorporate the recommendations into a new bill. In general, the legislators have reacted favorably to the report, but AI companies have expressed concern about the transparency part, fearing they’ll have to reveal their secrets to competitors. 

Three kinds of risk

There are three fundamental ways that emerging AI systems could create problems, and even catastrophes, for people:

1. Misalignment. Some experts fear that misaligned AI, acting creatively and autonomously, will operate in its own self-interest and against the interests of people. Research and media reports show that advanced AI systems can lie, cheat, and engage in deceptive behavior. GenAI models have been caught faking compliance, hiding their true intentions, and even strategically misleading their human overseers when it serves their goals. This has been observed in experiments with models such as Anthropic’s Claude and Meta’s CICERO, which lied to and betrayed allies in the game Diplomacy despite being trained for honesty.

2. Misuse. Malicious people, organizations, and governments could use genAI tools to launch highly effective cyberattacks, create convincing deepfakes, manipulate public opinion, automate large-scale surveillance, and control autonomous weapons or vehicles for destructive purposes. These capabilities could enable mass disruption, undermine trust, destabilize societies, and threaten lives on an unprecedented scale.

3. The collective acting on bad incentives. AI risk isn’t a simple story of rogue algorithms or evil hackers. Harms could result from collective self-interest combined with incompetence or regulatory failure. For example, when genAI-driven machines replace human workers, it’s not just the tech companies chasing efficiency. It’s also the policymakers who didn’t adopt labor laws, the business leaders who made the call, and consumers demanding ever-cheaper products. 

What’s interesting about this list of ways AI could cause harm is that all are nearly certain to happen. We know that because it’s already happening at scale, and the only certain change coming in the future is the rapidly growing power of AI. 

So, how shall we proceed? 

We can all agree that genAI is a powerful tool that is becoming more capable all the time. We want to maximize its benefit to people and minimize its threat. 

So, here’s what I believe is the question of the decade: What do we do to promote this outcome? By “we,” I mean the technology professionals, buyers, leaders, and thought leaders reading this column. 

What should we be doing, advocating, supporting, or opposing? 

I asked Andrew Rogoyski, director of Innovation and Partnerships at the UK’s Surrey Institute for People-Centred Artificial Intelligence, that question. Rogoyski works full-time to maximize AI’s benefits and minimize its harms. 

One concern with genAI systems, according to Rogoyski, is that we’re entering a realm where nobody knows how they work — even when they benefit people. As AI gets more capable, “new products appear, new materials, new medicines, we cure cancer. But actually, we won’t have any idea how it’s done,” he said. 

“One of the challenges is these decisions are being made by a few companies and a few individuals within those companies,” he said. Decisions made by a few people “will have enormous impact on…global society as a whole. And that doesn’t feel right.” He pointed out that companies like Amazon, OpenAI, and Google have far more money to devote to AI than entire governments. 

Rogoyski pointed out the conundrum exposed by solutions like the one California is trying to arrive at. At the core of the California Policy Working Group’s proposal is transparency, treating AI functionality as a kind of open-source project. On the one hand, outside experts can help flag dangers. On the other, transparency opens the technology to malicious actors. He gave the example of AI designed for biotech, something designed to engineer life-saving drugs. In the wrong hands, that same tool might be used to engineer a catastrophic bio-weapon.

According to Rogoyski, the solution won’t be found solely in some grand legislation or the spontaneous emergence of ethics in the hearts of Silicon Valley billionaires. The solution will involve broad-scale collective action by just about everyone.

It’s up to us

At the grass-roots level, we need to base our purchasing, use, and investment decisions on AI systems from companies that demonstrate ethical practices, strong safety policies, and a deep concern for alignment.

We all need to favor companies that “do the right thing in the sense of sharing information about how they trained [their AI], what measures they put in place to stop it misbehaving and so on,” said Rogoyski.

Beyond that, we need stronger regulation based more on expert input and less on Silicon Valley businesses’ trillion-dollar aspirations. We need broad cooperation between companies and universities. 

We also need to support, in any way we can, the application of AI to our most pressing problems, including medicine, energy, climate change, income inequality, and others.

Rogoyski offers general advice for anyone worried about losing their job to AI: Look to the young. 

While older professionals might look at AI and feel threatened by it, younger people often see opportunity. “If you talk to some young creative who’s just gone to college [and] come out with a [degree in] photography, graphics, whatever it is,” he said, “they’re tremendously excited about these tools because they’re now able to do things that might have taken a $10 million budget.”

In other words, look for opportunities in AI to accelerate, enhance, and empower your own work.

And that’s generally the mindset we should all embrace: We are not powerless. We are powerful. AI is here to stay, and it’s up to all of us to make it work better for ourselves, our communities, our nations, and our world.