OpenAI's Q* is alarming for a different reason

South Korean Go game fans watch a live broadcast of a match between Google's AI program AlphaGo and the world's then-top-ranked Go player on March 9, 2016. (Photo: AFP)

When news stories emerged last week that OpenAI had been working on a new AI model called Q* (pronounced "q star"), some suggested this was a major step toward powerful, humanlike artificial intelligence that could one day go rogue.

What's more certain: The hype around Q* has boosted excitement about the company's engineering prowess, just as it's steadying itself from a failed board coup.

Peaks of AI excitement about milestones have taken the public for a ride plenty of times before. The real warning we should take from Q* is the direction in which these systems are progressing. As they get better at reasoning, it will become more tempting to give such tools greater responsibilities. More than any concerns about AI annihilation, that alone should give us pause.

OpenAI hasn't confirmed what Q* is, with reinstated Chief Executive Officer Sam Altman describing it only as an "unfortunate leak", but from media reports it sounds similar to another system Alphabet's Google is working on. Gemini is the big new competitor to ChatGPT; according to Google DeepMind CEO Demis Hassabis, it won't only generate text and images but will also excel at planning and strategising. DeepMind famously created an AI model that beat champion Go players, and Gemini will use some of those techniques for problem solving.

With Q*, OpenAI seems to be pushing ChatGPT in a similar direction: according to multiple reports, Q* can perform grade-school math. That might sound unimpressive, but combining math capabilities with software that can also write text and create imagery is unique, and until now ChatGPT has struggled to do equations correctly. If it could, that might correlate with an improvement in problem-solving. Math requires understanding a problem and figuring out the steps to solve it before carrying out all the right calculations. That process is a little closer to how we humans think and solve problems.

Early versions of Gemini can already execute some tasks that require planning, according to someone with access to Google's forthcoming tool who didn't want to be named due to confidentiality commitments. Examples of tasks that Gemini can do are hard to come by, but the person who used the model said one could, for instance, ask it to conduct market research for a new product. It could then explore the web and come back with analysis and additional ideas. "With ChatGPT, you still have to hold the hands of the model a bit more," they added.

As more AI systems become able to independently do things like conducting research, their human operators will likely shift from giving them single tasks to giving them broader responsibility for several duties. Think coordinating with colleagues over email or managing analysis. That will sound tempting to any company eager to make itself more efficient, but such tools also promise to displace jobs, particularly the entry-level positions that ease young recruits into a new field.

Companies that outsource more to AI also risk baking gender and racial stereotypes into their work systems. Most firms must choose from a handful of so-called foundation models from OpenAI, Google or Amazon to upgrade their infrastructure, and such models have been called out for not only showing entrenched bias toward people with disabilities and racial minorities but also for being highly inscrutable. OpenAI and, by extension, Microsoft have refused to disclose details that independent researchers need to determine how biased their language models are.

Forrester Research has predicted that close to seven million US knowledge workers will be using Microsoft 365 Copilot in 2024. Meanwhile, Google is rolling out its own competitor, Duet AI, to the more than 3 billion users of its enterprise platform Workspace.

As such tools are imbued with more planning and strategic skills, we'll talk much more about giving them "responsibility" instead of "tasks". But we should do so slowly and carefully. Their anticipated disruption could come back to haunt us. ©2023 Bloomberg

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of 'We Are Anonymous'.
