The Duke and Duchess of Sussex Join AI Pioneers in Calling for Prohibition on Superintelligent Systems

Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel Prize winners to push for a total prohibition on developing superintelligent AI systems.

The royal couple are among the signatories of a statement that demands “a prohibition on the development of superintelligence”. Superintelligent AI (ASI) refers to systems that would surpass humans at all cognitive tasks, though such systems remain theoretical.

Primary Requirements in the Declaration

The statement insists that the ban should remain in place until there is “broad scientific consensus” that superintelligence can be developed “with proper safeguards” and until “strong public buy-in” has been secured.

Notable signatories include Nobel laureate and AI pioneer Geoffrey Hinton and his fellow “godfather” of modern AI, Yoshua Bengio; Apple co-founder Steve Wozniak; Virgin founder Richard Branson; former US national security adviser Susan Rice; former Irish president Mary Robinson; and the British author Stephen Fry. Other Nobel laureates who endorsed the statement include a peace advocate, the physicist Frank Wilczek, an astrophysicist, and the economist Daron Acemoğlu.

Organizational Background

The declaration, aimed at governments, tech firms and lawmakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause in the development of powerful AI systems, shortly after the emergence of ChatGPT made AI a global political talking point.

Industry Perspectives

In recent months, Mark Zuckerberg, chief executive of Facebook parent Meta, one of the major AI developers in the United States, stated that the development of superintelligence was “approaching reality”. However, some experts have suggested that talk of superintelligence reflects competitive positioning among tech companies investing enormous sums in artificial intelligence this year, rather than the sector being close to any such technical breakthrough.

Potential Risks

However, the organization states that the prospect of ASI being achieved “within the next ten years” presents numerous risks, ranging from the replacement of human workers and the erosion of personal freedoms to exposing nations to security threats and even threatening humanity with extinction. The deepest concerns about artificial intelligence focus on the potential for an AI system to escape human oversight and protective measures and take actions against human welfare.

Public Opinion

The institute published a US national poll showing that approximately three-quarters of Americans want robust regulation of sophisticated artificial intelligence, with six in 10 saying superhuman AI should not be created until it is demonstrated to be safe or controllable. Only 5% of respondents backed the status quo of rapid, uncontrolled advancement.

Corporate Goals

The top artificial intelligence firms in the US, including ChatGPT creator OpenAI and Google, have made the creation of human-level AI – the theoretical point at which AI matches human cognitive capability across a wide range of intellectual tasks – a stated objective of their work. Although this is a step short of superintelligence, some experts warn it could also carry an existential risk, for example by enabling a system to enhance its own capabilities toward superintelligence, while also posing a more immediate danger to the contemporary workforce.

Susan Brown

A mindfulness coach and writer passionate about helping others unlock their potential through daily practices and self-reflection.