A new survey by the AI Impacts research project, the University of Bonn, and the University of Oxford reflects growing unease among AI professionals about the technology’s swift advance in recent years.
The survey drew responses from 2,778 researchers who had published in leading AI venues. One key finding: in aggregate, respondents gave a 10% chance that machines will outperform humans on every task within three years, and a 50% chance by 2047.
“While the optimistic scenarios reflect AI’s potential to revolutionize various aspects of work and life, the pessimistic predictions — particularly those involving extinction-level risks — serve as a stark reminder of the high stakes involved in AI development and deployment,” the researchers wrote.
AI could soon handle more tasks
Participants were asked to estimate when each of 39 AI tasks would become “feasible,” meaning a top AI lab could implement it within a year. The tasks ranged from translating newly discovered languages to building a payment processing site from scratch. Respondents gave all but four of the tasks at least a 50% chance of becoming feasible within the next decade.
The survey also probed the timelines for High-Level Machine Intelligence (HLMI) and Full Automation of Labor (FAOL). HLMI was defined as the point when unaided machines can perform every task better and more cheaply than humans; FAOL as the point when machines can automate every occupation more efficiently and cost-effectively than human labor. Participants estimated a 50% chance of HLMI by 2047, fully 13 years sooner than the estimate from a 2022 survey. They put a 50% chance on full labor automation by 2116, itself 48 years sooner than previously forecast.
Chris McComb, director of the Human+AI Design Initiative at Carnegie Mellon University, who was not involved in the study, said in an interview that it’s “extremely” unlikely that all human occupations will become fully automatable by 2037.
“There are two competing forces at work here — the adaptability of everyday people and the fact that AI often struggles in novel situations,” McComb said. “Fortunately, the world is filled with novel situations! While AI becomes a more proficient problem-solver, humans will become increasingly important problem framers, finding ways to translate novel situations into familiar building blocks.
“In our research, we've started to see exactly that,” he said. “When we put together teams of humans and AIs, human members of the team take on a vital role by helping AI agents adapt to novel scenarios that they wouldn't be able to handle on their own. We refer to them as ‘AI handlers.’”
Selmer Bringsjord, director of the AI & Reasoning Lab at Rensselaer Polytechnic Institute, is also skeptical of the survey’s timeline, but in the other direction: he said in an interview that the vast majority of jobs now held by humans in technologized economies could be carried out entirely by AIs by 2050.
“An efficient way to see why is just to look at jobs in and revolving around transport and treat this domain as representative,” he added. “If an item needs to get from Bangalore, India, to a remote lake in the Adirondack Park, and the item is, say, in a home in India..., well, every inch of the process of carrying this out will be done by AIs. That entails no human boxer-uppers (robots), no drivers (self-driving vehicles), no pilots (self-flying planes and drones), etc. The box will just be outside the front door of a cabin on that lake, safe and sound, accomplished without humans.
“That this will be in place by 2050 is indubitable,” Bringsjord said.
Experts: AI will surprise us
The survey asked participants to assess the likelihood of certain AI traits by 2043. Large majorities believed that within the next 20 years, AI-based tools would find unexpected ways to achieve goals (82%), be able to talk like a human expert on most topics (81%), and frequently behave in ways that surprise humans (69%). Many also anticipated that, as early as 2028, AI will often leave humans puzzled, unable to discern the actual reasons behind a system’s outputs.
The survey also surfaced significant worries about the potential misuse of AI. Those concerns include AI being used to create and spread false information through deepfakes, manipulate public opinion at scale, enable dangerous groups to develop powerful tools such as viruses, and assist authoritarian regimes in controlling their populations.
The experts polled reached a strong consensus on the importance of AI safety research but held mixed views on AI’s impact: more than two-thirds of respondents (68%) believed the benefits of AI outweigh its drawbacks, yet roughly 58% saw at least some possibility of significantly detrimental outcomes. Perceptions of risk varied with how the questions were posed: about half saw a better than 10% chance of human extinction or severe disempowerment due to AI, and one in 10 participants estimated at least a 25% chance of catastrophic outcomes, including human extinction.
That said, McComb is among those who remain optimistic.
“For a long time, humankind has harnessed powerful forces, from fire to atom-splitting,” he said. “The key lies in using principles of engineering and design to effectively and safely harness these forces, ‘designing away’ from destructive potential and towards productive good. AI is not a threat to be feared, but a design material to be used.”
Bringsjord, conversely, is among the pessimists. He described what he calls the PAID Problem, a framework of his own that gauges the danger posed by a given AI, or group of AIs, by its levels of Power, Autonomy, and Intelligence. He noted that today’s chatbots already have fairly high levels of autonomy; soon enough, that autonomy will grow as the data they factor in approaches all the available data on Earth.
“I don't think we're talking ‘free will’ at the human level here, and there's no real creativity (which presumably requires max autonomy), but the level of autonomy coming will be extreme. Power is quite another matter, fortunately.
“Unless certain science and engineering is pursued and applied by all high-tech open-market democracies, future AIs that are at once powerful, autonomous, and intelligent will eventually pose an acute danger and may well eventually destroy us.”