The accessibility of artificial intelligence (AI) will transform the international landscape, empowering "bad actors" and powerful regimes and leading to unprecedented societal disruptions, a risk analysis expert told Fox News Digital.
"We know that when you have a bad actor, and they have a single-shot rifle versus an AR-15, they're not going to kill as many people, and the AR-15 is nothing compared to what we're going to see from artificial intelligence. We're going to see disruptive uses of these tools," said Ian Bremmer, founder and president of political risk research firm Eurasia Group.
Referring to improved capabilities for autonomous drones and the ability to develop new viruses, Bremmer said "we have never seen this level of malevolent power in the hands of bad actors." He said AI technology that is "more dangerous than an AR-15" will be in the hands of "millions and millions of people."
"Most of them are responsible," Bremmer said. "Most of them won't try to disrupt, to destroy, but a lot of them will."
Ian Bremmer (Leigh Vogel/Getty Images for Concordia Summit/File)
The Eurasia Group published a report earlier this year outlining the top threats for 2023, placing AI at No. 3 under "weapons of mass disruption." The group listed "rogue Russia" as the top threat for the year, followed by "maximum Xi [Jinping]," with "inflation shockwaves" and "Iran in a corner" ranked behind AI – a ranking that helps frame the seriousness of the risk AI can pose.
Bremmer said he is "enthusiastic" about AI and welcomes the great changes the technology could bring in the next five to 10 years to health care, education, the energy transition, efficiency and "just about any scientific field you can imagine."
He cautioned, however, that AI can spread misinformation and produce other negative effects that, "in the hands of bad actors … will propagate."
For example, he noted, there are currently "about a hundred people" in the world with the knowledge and technology to create a new smallpox virus, but AI could put that same knowledge, or those abilities, within reach of far more people.
Fox News polling on AI (Fox News)
"There is no pause button," Bremmer said. "These technologies will be developed, they will be developed rapidly by American companies and will be available very widely, very soon."
“I’m not necessarily saying, ‘Oh, the new nuclear weapon is X,’ but more that these technologies will be available to almost anyone for very disruptive purposes,” he added.
[NOTE: If you were to ask ChatGPT how to make smallpox, it will refuse and say that it can't assist because creating or distributing harmful viruses or engaging in any illegal or dangerous activity is strictly prohibited and unethical.]

Numerous experts have already discussed AI's potential to empower rogue actors and nations with more authoritarian governments, like Iran and Russia, but in recent years technology has also played a key role in empowering protesters and anti-government groups to take action against their oppressors.
Ian Bremmer, founder and president of political risk research firm Eurasia Group, said AI technology that is "more dangerous than an AR-15" will be in the hands of "millions and millions of people." (Getty Images/File)
Through the use of encrypted messaging apps like Telegram and Signal, protesters have been able to organize and demonstrate against their governments. China was unable to stem the flood of video showing protests in various cities as residents grew weary of the government's "zero COVID" policies, forcing Beijing to flood Twitter with posts about porn and escorts in an attempt to bury unfavorable news.
Bremmer cautioned against assuming the technology will prove helpful to the underdog, saying instead that it will help in cases where the government is "weak" but will prove dangerous "where governments are strong."
"Remember, the Arab Spring failed," Bremmer said. "It was very different from the revolutions we've seen before in places like Ukraine and Georgia, and that's because governments in the Middle East were able to use surveillance tools to identify and then punish people involved in protests against the state."
"So, I'm concerned that in countries like Iran and Russia and China, where the government is relatively strong and, indeed, has the ability to effectively surveil their population using this technology, the top-down properties of AI and other surveillance technologies will be stronger in the hands of a few actors than they would be in the hands of the average citizen."
"The communications revolution empowered people and democracies at the expense of authoritarian regimes," he continued. "The data revolution, the surveillance revolution, which I think is actually amplified by AI, really empowers technology companies and governments that have access to and control of that data, and that's a concern."