UPDATE: The Senate’s AI Insight Forum morning session was focused on the potential impact of AI — and what the government should do about its negative effects.
Senate Majority Leader Chuck Schumer (D-NY), who organized the gathering of CEOs, civil rights leaders and other industry reps, told reporters that, in terms of a timeline for legislation, “it can’t be days or weeks, nor should it be years. It will be in the general category of months.”
Elon Musk, who left early, echoed the need for regulation, saying afterward that it was “important for us to have a regulator, which you can think of as a referee, to ensure that companies take actions that are safe and that are in the interests of the general public.”
Among the topics discussed during the morning session were open source models, as well as security and privacy.
The president of the Writers Guild of America West, Meredith Stiehm, talked of the current strike, the guild’s main AI-related issues and concerns over AI companies’ use of scripts as part of training models, according to a source who was in the room.
Charles Rivkin, the chairman and CEO of the MPA, talked of legislation that “seeks to address the downside risks without stifling innovation or compromising on our longstanding democratic ideals.”
At one point, Musk praised the MPA’s ratings system as an example of successful industry self-regulation, according to a source who was in the room.
Musk told CNBC that he told the gathering that AI was a “double-edged sword.”
“It was a very civilized discussion among some of the smartest people in the world,” he said. “I thought Senator Schumer did a great service to humanity here, with the support of the rest of the Senate, and I think something good will come of this. This meeting may go down in history as being very important for the future of civilization.”
Musk said that at one point Schumer asked for a show of hands of who in the room was in favor of regulation, and “I believe almost everyone did.”
Other tech CEOs left the morning session without saying much. Asked how it went, Bill Gates told reporters, “Fine.” Mark Zuckerberg, who sat on the opposite side of the room from Musk, left down a flight of stairs instead of going past reporters, who still shouted questions at him.
Schumer said that there were differences on what type of regulation is needed. “These are the difficulties. Obviously, we have to try to thread the needle, and probably in some areas a lighter touch is called for, that are more benign, and in some areas a heavy touch is called for, which could be more dangerous.”
Schumer said that more AI forums will be held, with the Senate committees then tasked with coming up with regulations.
“The consequences of AI going wrong are severe, so we have to be proactive rather than reactive,” Musk told CNBC. He said that there was a likelihood of a regulatory agency for AI similar to the FTC or FAA.
PREVIOUSLY: Tech CEOs and other industry leaders converged on the stately top floor of a Senate office building on Wednesday for a closed-door forum with lawmakers focused on potential regulation of AI.
The sessions, organized by Senate Majority Leader Chuck Schumer, were meant as a listening session for lawmakers as some sound the alarm over the potential for AI to upend entire industries — and humanity itself.
The issue also is front and center in the labor strife in Hollywood, and one of the attendees is the president of the Writers Guild of America West, Meredith Stiehm.
Another attendee is Charles Rivkin, the chairman and CEO of the Motion Picture Association. The MPA’s position so far has been that existing copyright laws are sufficient to address concerns, but the organization also sees promise in the technology.
The WGA has been focused on the potential for AI to replace writing positions, one of a plethora of issues that have led to a stalemate.
Reporters staked out the third floor of the Russell Senate Office Building, hoping to get a glimpse or short comment from one of the CEOs as they entered the session. Schumer has defended the closed-door nature of the proceedings, although photographers and reporters were allowed a brief glimpse inside the Kennedy Caucus Room, where Musk was seated next to Palantir CEO Alex Karp, who was next to AFL-CIO President Elizabeth Shuler and Google CEO Sundar Pichai.
As Musk entered the room, smiling, he gave a brief wave at photographers. Zuckerberg was spotted just outside the caucus room chatting with Pichai. Others attending included Bill Gates, OpenAI CEO Sam Altman, and Jack Clark, Anthropic co-founder.
In opening remarks released by his office, Schumer told those gathered that the event is “truly unique, and it needs to be unique, because tackling AI is a unique, one-of-a-kind undertaking.”
He also referred to Congress’s lag in addressing previous new technologies, if it addressed them at all. Issues like privacy and antitrust have long been debated, but no substantial recent legislation has passed.
“In past situations when things were this difficult, the natural reaction of a Senate or a House was to ignore the problem and let someone else do the job,” he said. “But with AI we can’t be like ostriches sticking our heads in the sand. Only Congress can do the job, and if we wait until after AI has taken hold in society, it will have been too late.”
He said that government should play a role in establishing safeguards, because “even if individual companies promote safeguards, there will always be rogue actors, unscrupulous companies, and foreign adversaries that seek to harm us.”
In the industry, in addition to the impact that AI will have on the workforce, a major concern is over copyright.
At a recent Copyright Office event on AI, the MPA’s Ben Sheffner said that “while humans are and will remain at the heart of the creative process, we believe AI will be a powerful tool that can enhance the filmmaking process as well as the audience’s viewing experience and fan engagement.”
When it comes to copyright law, he said, AI “raises many interesting questions.”
“Many of those questions implicate areas of law that are already well developed. There is not a reason yet to believe that existing doctrines cannot provide workable answers to those questions. What is most important is that courts, Congress, the Copyright Office, and other regulatory agencies approach these — based on limited experience with this technology.”
The issue is already in litigation. On Wednesday, Michael Chabon and other writers filed a class action lawsuit against Meta over its LLaMA platform for using their works to “train” the AI system. They filed a similar lawsuit against OpenAI, maker of ChatGPT, last week. Sarah Silverman and other content creators filed a lawsuit against AI ventures in July, and Barry Diller is helping to organize publishers in a litigation battle. Diller recently told podcast host Kara Swisher that he doubted that Congress would be able to establish parameters, leaving it to legal actions. Diller indicated that the litigation would challenge the idea that using copyrighted works to train AI systems is a “fair use” under copyright law.
At the Copyright Office session in May, the MPA’s Sheffner noted that “opinions seem very starkly divided on whether training AI systems on copyrighted works constitutes copyright infringement or whether it’s fair use, but we at the MPA simply don’t believe we can or should make definitive, blanket black or white pronouncements on these questions, especially at this still early stage of the technology’s development and implementation.”