OpenAI boss Sam Altman tells Congress he fears AI could cause ‘harm’ to the world
The boss of OpenAI, the startup that developed ChatGPT, has told US lawmakers that he welcomes, and is actively calling for, more regulation to prevent the “harms” of artificial intelligence (AI), particularly the large language models (LLMs) that power generative AI.
"My worst fear is that we, the industry, cause significant harm to the world. I think, if this technology goes wrong, it can go quite wrong and we want to be vocal about that and work with the government on that,” OpenAI’s chief executive Samuel Altman told Congress on Tuesday afternoon.
In addition to Mr Altman, senators heard from two other witnesses — Professor Gary Marcus of New York University, and Christina Montgomery, chief privacy and trust officer for tech giant IBM.
Here is a summary of some key points made during the hearing.
AI use in elections
Prof Marcus highlighted the risk that AI systems could subtly shift people’s beliefs without them realising it, referencing a recent article in the Wall Street Journal.
"We don’t know what ChatGPT 4 is trained on... how these systems might lead people about very much depends on what it is trained on. We need scientists doing analysis on what the political impact of these systems might be,” said Prof Marcus.
“We have this issue of potential manipulation... hyper-targeting is definitely going to come. Maybe it will be open-source models, I don’t know, but the tech is able to do that and will certainly get there.”
When questioned by senators on this point, Mr Altman said he agreed: “Given we're going to face an election next year... this is a concern. I do think some regulation on this would be wise. We need some rules on what’s expected in terms of disclosure from a company providing these models. I’m nervous about it.”
Senator Dick Durbin said that the AI industry seemed to have the message of “stop me before I innovate again”, compared to social-media networks, which have often not wanted to be held accountable for actions that happened on their platforms.
In response, Mr Altman emphasised that AI is a “very new technology” and that the technology industry needed a “liability framework” to work with.
He noted that when Photoshop first appeared, people were fooled by edited images for a while, but quickly came to understand that photos could be doctored. It would be advantageous, he explained, for the public to develop the same understanding of content produced by ChatGPT.
Prof Marcus told Congress that he was deeply concerned about people’s safety when it comes to AI, warning that the US was now facing “the perfect storm”. He cautioned lawmakers to learn from what happened with social media.
When asked by senators about whether AI will take over all our jobs, he said: “Eventually, all jobs will be taken over by AI, but we are not that close to AI general intelligence now.”
The key term is AI general intelligence, more commonly called artificial general intelligence (AGI): a hypothetical form of AI that does not exist today and that would possess cognitive abilities comparable to a human’s.
How to regulate AI
Asked about how AI should be regulated, Prof Marcus said that he recommended creating an “international agency”, where multiple governments came together to supervise and monitor the growth of AI.
“Some genies are out of the bottle, some are not — we don’t have machines that can self-improve yet, for example,” he told Congress.
“But there are other genies to be concerned about. We need to have some meetings very soon about how you build international agencies very quickly.”
The senators also quizzed IBM’s Ms Montgomery on what she thought about the EU’s proposed AI Act, which would require developers to assess up front how much harm their AI systems could cause.
"Absolutely that approach makes a ton of sense,” Ms Montgomery said. “Guardrails need to be in place. We don’t want to slow down regulation to address real risks right now. We have existing regulatory authorities right now... a lot of the issues we're talking about span multiple domains.”
OpenAI’s Mr Altman said that AI models needed to be trained on a system of “values” developed by people around the world.
He also called for an independent commission of experts able to evaluate whether AI models comply with regulations, with the power to both grant and revoke licences.
“We’re excited to collect systems of values from around the world,” he told Congress.
“Groups of people who have historically been underrepresented or not had much access to this technology — this technology seems to have the ability to lift them up.”
Mr Altman was asked by Senator Cory Booker whether ChatGPT would ever show adverts to users.
Altman replied that, while he couldn’t rule it out completely, he preferred a subscription model, such as the one currently offered with the latest version of ChatGPT.
Another senator quizzed OpenAI’s chief executive on whether he made much money. Altman responded that he did the job because he loved it, and was paid only enough salary to cover health insurance, holding no equity in the company.
“Really?” replied Senator John Kennedy. “You need a lawyer or an agent.”
Should we stop developing AI?
The senators then grilled OpenAI’s Mr Altman, IBM’s Ms Montgomery, and Prof Marcus on whether more drastic options should be considered to prevent the harms of AI, a concern everyone in the hearing room appeared to share.
“Why don’t we just let people sue you?” asked Senator Josh Hawley, in response to concerns from other senators that US regulators are already understaffed.
He suggested that any citizen or company who feels harmed by AI could seek redress through the courts, leaving regulators with less to do.
Another idea, raised by other senators including Mr Booker, was for the tech industry to pause all AI development until the risks and remedies could be established.
OpenAI’s Mr Altman disagreed with the idea of taking a break, though he confirmed that his company is not currently training a successor to GPT-4.
IBM’s Ms Montgomery was not keen, either. “I'm not sure how practical it is to pause but we absolutely should prioritise regulation,” she told lawmakers.
Senator Richard Blumenthal strongly disagreed with the idea of pausing AI innovation.
“The world won’t wait. The global scientific community won’t wait. We have adversaries that are moving ahead,” he warned Congress.
“Safeguards and protections yes, but a flat stop sign, sticking our heads in the sand, I would advocate against that.”