Harry and Meghan Join Tech Visionaries in Demanding Prohibition on Superintelligent Systems
Prince Harry and Meghan Markle have joined forces with artificial intelligence pioneers and Nobel laureates to advocate for a complete ban on creating artificial superintelligence.
Harry and Meghan are among the signatories of a statement that calls for “a prohibition on the development of superintelligence”. Superintelligent AI refers to artificial intelligence that could exceed human intelligence in all cognitive tasks, though the technology remains theoretical.
Key Demands in the Declaration
The statement says the ban should remain in place until there is “widespread expert agreement” that ASI can be developed “safely and controllably” and until “strong public buy-in” has been secured.
Prominent signatories include AI pioneer and Nobel laureate Geoffrey Hinton; a colleague of his and fellow pioneer of modern AI; an Apple co-founder; the British business magnate who founded Virgin; a former US national security adviser; former Irish president Mary Robinson; and a UK writer and public intellectual. Other Nobel laureates who signed include Beatrice Fihn, Frank Wilczek, John C Mather, and an economics laureate.
Organizational Background
The statement, aimed at governments, tech firms and lawmakers, was coordinated by the Future of Life Institute (FLI), an American AI ethics organization that previously called, in 2023, for a pause in the development of powerful AI systems, shortly after the launch of conversational AI made artificial intelligence a topic of worldwide public debate.
Industry Perspectives
In recent months, Mark Zuckerberg, the head of Facebook parent Meta, one of the major AI developers in the United States, claimed that progress toward superintelligent AI was “approaching reality”. Nevertheless, some experts have suggested that talk of superintelligence reflects market competition among technology firms that have spent hundreds of billions of dollars on artificial intelligence in recent years, rather than the sector being close to any technical breakthrough.
Possible Dangers
The organization states, however, that the prospect of ASI being achieved “within the next ten years” carries numerous risks, from the elimination of human jobs and the erosion of personal freedoms to the exposure of nations to security threats and even an existential danger to humanity. Existential fears about AI centre on a system’s possible capability to evade human control and safety guidelines and to set in motion events that harm human welfare.
Citizen Sentiment
The institute released a US national poll showing that approximately three-quarters of Americans want strong oversight of advanced AI, with 60% believing superhuman AI should not be created until it is proven safe and controllable. Only a small fraction of respondents supported the status quo of rapid, unregulated development.
Industry Objectives
The leading artificial intelligence firms in the US, including a major AI lab behind a popular conversational AI and the search giant, have made the development of artificial general intelligence – the hypothetical point at which AI matches human-level intelligence across many intellectual tasks – a stated objective of their research. Although AGI is a step short of superintelligence, some specialists warn that it too could carry an extinction threat, for instance by improving itself until it achieves superintelligence, while also posing an underlying danger to the modern labour market.