OpenAI CEO Sam Altman said Wednesday his company could “cease operating” in the European Union if it is unable to comply with the provisions of new artificial intelligence legislation that the bloc is currently preparing.
“We’re gonna try to comply,” Altman said on the sidelines of a panel discussion at University College London, part of an ongoing tour of European countries. He said he had met with E.U. regulators to discuss the AI act as part of his tour, and added that OpenAI had “a lot” of criticisms of the way the act is currently worded.
Altman said that OpenAI’s skepticism centered on the E.U. law’s designation of “high risk” systems as it is currently drafted. The law is still undergoing revisions, but under its current wording it may require large AI models like OpenAI’s ChatGPT and GPT-4 to be designated as “high risk,” forcing the companies behind them to comply with additional safety requirements. OpenAI has previously argued that its general purpose systems are not inherently high-risk.
“Either we’ll be able to solve those requirements or not,” Altman said of the E.U. AI Act’s provisions for high risk systems. “If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible.”
The law, Altman said, was “not inherently flawed,” but he went on to say that “the subtle details here really matter.” During an on-stage interview earlier in the day, Altman said his preference for regulation was “something between the traditional European approach and the traditional U.S. approach.”
Altman also said on stage that he was worried about the risks stemming from artificial intelligence.