OpenAI CEO Makes Historic Capitol Hill Appearance, Calls for Regulation of AI

On Tuesday, the fallout from new technology brought Sam Altman to the U.S. Congress, where he testified before the Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law at a hearing titled "Oversight of AI: Rules for Artificial Intelligence."


This was Altman's first appearance before Congress. For the three-hour hearing, the boyish-looking CEO traded his usual pullover and jeans for a blue suit and tie. He was joined at the witness table by Gary Marcus, a longtime critic of artificial intelligence, and Christina Montgomery, IBM's chief privacy and trust officer.


The tone of congressional hearings featuring tech industry executives in recent years has often been hostile, even saber-rattling. But at Tuesday's hearing, lawmakers on Capitol Hill showed little appetite for grilling Altman, and Altman largely agreed with them on the need to regulate his company and the increasingly powerful AI technologies being developed by companies like Google and Microsoft. At this early stage of the conversation, however, neither the executives nor the lawmakers could say what that regulation should look like.



Congress has historically been slow to regulate new technologies. Yet only six months after ChatGPT's debut, it is already summoning the people in charge to Capitol Hill to discuss regulation, a sign of how uneasy the technology has made it. For months, companies large and small have raced to bring increasingly capable generative artificial intelligence to market, pouring vast amounts of data and billions of dollars into the effort. Some critics fear the technology will exacerbate social harms, including bias and misinformation, while others warn that artificial intelligence could end humanity itself.


Christina Montgomery acknowledged at the hearing that AI poses clear risks to workers in a wide range of industries, including those previously thought to be unaffected by automation.


"I think it's important to understand and see GPT4 as a tool, not a creature," Altman said, taking a different view. Such a model "is good at getting things done, not jobs," and "will therefore make people's jobs easier without replacing them entirely. His company's technology may destroy some jobs, but it will also create new ones, and "it's important for the government to figure out how we want to mitigate that impact.


Gary Marcus, a well-known AI skeptic, had some tough questions and comments for Altman. "Humans have taken a back seat," he said, as companies race to develop ever more sophisticated AI models with little regard for the potential dangers. He also noted that OpenAI is not transparent about the data it uses to develop its systems.


"I think that if something goes wrong with this technology, it could go very wrong. We want to be forthright," Altman admitted, "and we want to work with the government to prevent that from happening." Interestingly, some media outlets hailed this as an exciting admission, while others suggested it could be a breakthrough in Silicon Valley's deceptive performance.


Marcus expressed skepticism about Altman's prediction that new jobs will replace those eliminated by AI. "We have unprecedented opportunities here, but we also face a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability," he said.


Echoing Marcus, Altman said the U.S. should consider licensing and testing requirements for the development of AI models. He proposed creating an agency that would license the development of large-scale AI models, establish safety regulations, and test AI models before they are released to the public. Independent experts, he added, would be needed to audit the models against a range of metrics.


When asked which AI systems would need to be licensed, he said that a model able to persuade or manipulate an individual's beliefs should face a "very high bar." "We believe that the benefits of the tools we have deployed so far far outweigh the risks, but ensuring their safety is critical to our work."


Altman also believes companies should have the right to say they don't want their data used for AI training, an idea being discussed in Congress. However, he added that publicly available material on the web should be fair game.


Altman also said he "wouldn't say no to advertising forever," but that he prefers a subscription-based model.


According to The New York Times, the anger lawmakers have directed at other tech executives did not extend to Altman, who found a friendly audience among the members of the subcommittee. Members of the Senate Subcommittee on Privacy, Technology and the Law thanked Altman for meeting with them privately and for agreeing to attend the hearing. New Jersey Democrat Cory Booker called Altman by his first name several times.


The newspaper also reported, citing attendees, that Altman discussed his company's technology over dinner with dozens of House members Monday night and met privately with several senators before the hearing.


"The details make the difference." Altman's suggestions for regulation don't go far enough and should include restrictions on the use of AI in policing and biodata, said Sarah Myers West, managing director of the AI Now Institute, a policy research center, in an interview with The New York Times, "nor has the other side shown any signs of slowing down the development of the OpenAI ChatGPT tool ".


"It's so ironic to see the gestures of concern about harm from people who are quickly putting the systems that are causing these hazards to commercial use," she told reporters.


It's unclear how lawmakers will respond to the calls to regulate artificial intelligence. For the past few decades, policymakers have deferred to the promises of Palo Alto and Mountain View. But as evidence mounts that reliance on digital technology carries serious negative social, cultural and economic consequences, both parties have shown a willingness, for different reasons, to regulate the industry.


This hearing is the first in a series in which members of Congress hope to learn more about the potential benefits and harms of AI and ultimately "make the rules" for it, avoiding a repeat of the failure to regulate social media until it was too late. For now, they still have time to ensure AI does not produce the same social ills, such as serious disruption of U.S. elections.


Subcommittee members suggested creating an independent body to oversee AI; rules forcing companies to disclose how their models work and what data sets they use; and antitrust rules to prevent companies like Microsoft and Google from monopolizing the emerging market.


At times during the hearing, some lawmakers revealed the persistent gap in technical knowledge between Washington and Silicon Valley.


For example, South Carolina Republican Lindsey Graham repeatedly asked witnesses whether the liability protections that shield online platforms like Facebook and Google from lawsuits over user speech also apply to artificial intelligence.


Altman calmly drew the distinction between AI and social media several times. "We need to work together and find a whole new approach," he said.


Christina Montgomery also urged Congress at the hearing to be careful with any broad rules that lump different types of AI together.


An AI law, she suggested, could resemble Europe's proposed regulation, which assigns different levels of risk to different applications. She called for rules that focus on specific uses rather than regulating the technology itself.


"By its nature, AI is just a tool, and tools can be used for different purposes," she said, adding that Congress should "take a precise regulatory approach to AI.


She urged Congress to focus regulation on areas that could cause the greatest social harm.

