[Image: Sam Altman, chief executive officer of OpenAI, at the Hope Global Forums annual meeting in Atlanta, Georgia, US, on Monday, Dec. 11, 2023. Dustin Chambers | Bloomberg | Getty Images]
DAVOS, Switzerland — OpenAI co-founder and CEO Sam Altman said generative artificial intelligence as a sector, and the U.S. as a country, are both “going to be fine” no matter who wins the presidential election later this year.
Altman was responding to a question about Donald Trump‘s resounding victory in the Iowa caucuses and the public being “confronted with the reality of this upcoming election.”
“I believe that America is gonna be fine, no matter what happens in this election. I believe that AI is going to be fine, no matter what happens in this election, and we will have to work very hard to make it so,” Altman said this week in Davos during a Bloomberg House interview at the World Economic Forum.
Trump won the Iowa Republican caucuses in a landslide on Monday, setting a record for the race with a 30-point lead over his closest rival.
“I think part of the problem is we’re saying, ‘We’re now confronted, you know, it never occurred to us that the things he’s saying might be resonating with a lot of people and now, all of a sudden, after his performance in Iowa, oh man.’ That’s a very like Davos thing to do,” Altman said.
“I think there has been a real failure to sort of learn lessons about what’s kind of like working for the citizens of America and what’s not.”
Part of what has propelled leaders like Trump to power is a working-class electorate that resents the feeling of having been left behind, with advances in tech widening the divide. When asked whether there’s a danger that AI furthers that hurt, Altman responded, “Yes, for sure.”
“This is like, bigger than just a technological revolution … And so it is going to become a social issue, a political issue. It already has in some ways.”
As voters in more than 50 countries, accounting for half the world’s population, head to the polls in 2024, OpenAI this week put out new guidelines on how it plans to safeguard against abuse of its popular generative AI tools, including its chatbot, ChatGPT, as well as DALL·E 3, which generates original images.
“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” the San Francisco-based company wrote in a blog post on Monday.
The beefed-up guidelines include cryptographic watermarks on images generated by DALL·E 3, as well as an outright ban on the use of ChatGPT in political campaigns.
“A lot of these are things that we’ve been doing for a long time, and we have a release from the safety systems team that not only sort of has moderating, but we’re actually able to leverage our own tools in order to scale our enforcement, which gives us, I think, a significant advantage,” Anna Makanju, vice president of global affairs at OpenAI, said on the same panel as Altman.
The measures aim to stave off a repeat of past election interference enabled by technology, such as the Cambridge Analytica scandal, which came to light in 2018. Reporting in The Guardian and elsewhere revealed that the controversial political consultancy, which worked for the Trump campaign during the 2016 U.S. presidential election, harvested the data of millions of people to influence elections.
Altman, asked about OpenAI’s measures to ensure its technology wasn’t being used to manipulate elections, said that the company was “quite focused” on the issue, and has “a lot of anxiety” about getting it right.
“I think our role is very different than the role of a distribution platform” like a social media site or news publisher, he said. “We have to work with them, so it’s like you generate here and you distribute here. And there needs to be a good conversation between them.”
However, Altman added that he is less concerned about the dangers of artificial intelligence being used to manipulate the election process than was the case in previous election cycles.
“I don’t think this will be the same as before. I think it’s always a mistake to try to fight the last war, but we do get to take away some of that,” he said.
“I think it’d be terrible if I said, ‘Oh yeah, I’m not worried. I feel great.’ Like, we’re gonna have to watch this relatively closely this year [with] super tight monitoring [and] super tight feedback.”
While Altman isn’t worried about what the outcome of the U.S. election will mean for AI, the shape of the next government will be crucial to how the technology is ultimately regulated.
Last year, President Joe Biden signed an executive order on AI, which called for new standards for safety and security, protection of U.S. citizens’ privacy, and the advancement of equity and civil rights.
One thing many AI ethicists and regulators are concerned about is the potential for AI to worsen societal and economic disparities, especially as the technology has been shown to reproduce many of the same biases held by humans.