Prince Harry and Meghan Markle have teamed up with AI experts and Nobel Prize winners to advocate for a total prohibition on developing superintelligent AI systems.
Harry and Meghan are among the signatories of an influential declaration calling for “a ban on the creation of artificial superintelligence”. Superintelligent AI refers to artificial intelligence that would surpass human cognitive abilities in every intellectual domain, though such technology has not yet been developed.
The statement says the prohibition should remain in place until there is “widespread expert agreement” that superintelligence can be created “with proper safeguards” and until “substantial public support” has been achieved.
Notable signatories include the Nobel laureate and AI pioneer Geoffrey Hinton; his colleague and fellow pioneer of modern AI, another AI expert; tech entrepreneur Steve Wozniak; UK entrepreneur Richard Branson; Susan Rice; a former Irish president and international leader; and a UK writer and public intellectual. Other Nobel winners who endorsed the statement include a peace advocate, a physics Nobelist, John C Mather, and an economics expert.
The declaration, aimed at national leaders, tech firms and policymakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI made artificial intelligence a global political talking point.
In recent months, Mark Zuckerberg, the chief executive of Facebook parent Meta, said that the development of superintelligence was “now in sight”. Nevertheless, some experts have suggested that talk of superintelligence reflects market competition among technology firms spending hundreds of billions of dollars on AI this year alone, rather than the industry being close to any technical breakthrough.
However, FLI argues that the prospect of artificial superintelligence being achieved “within the next ten years” carries numerous risks, ranging from the elimination of human jobs and the erosion of personal freedoms to national security threats and even human extinction. Existential fears about AI centre on the possibility of a system evading human oversight and safety guardrails and taking actions harmful to human welfare.
The institute released a US national poll showing that about 75% of Americans want strong oversight of sophisticated artificial intelligence, with six in 10 believing that superhuman AI should not be developed until it is proven to be safe or controllable. Only 5% of respondents backed the status quo of rapid, uncontrolled advancement.
The top artificial intelligence firms in the United States, including a major AI lab behind a leading conversational AI product and Google, have made the development of artificial general intelligence – the theoretical state in which artificial intelligence matches human capability at most cognitive tasks – an explicit goal of their research. While this is a step below superintelligence, some experts warn it could also carry an existential risk, for instance by improving itself toward superintelligent levels, as well as an implicit threat to the contemporary workforce.