The world’s most valuable and dominant internet companies are based in the US, but the nation’s unproductive lawmakers and business-friendly courts have effectively outsourced the regulation of tech giants to the EU. That has given tremendous power to Didier Reynders, the European commissioner for justice, who is in charge of crafting and enforcing laws that apply across the 27-nation bloc. After nearly four years on the job, he’s tired of hearing big talk from the US with little action.
Ahead of his latest round of biannual meetings with US officials, including attorney general Merrick Garland in Washington, DC, tomorrow, Reynders told WIRED why the US needs to finally step up, where a probe into ChatGPT is headed, and why he made contentious comments about one of the world’s most prominent privacy activists. His bicoastal tour began with a Waymo robotaxi ride through San Francisco (he gave it a rave review) and included meetings with Google and California’s privacy czar.
It’s been five years since the EU’s stringent privacy law, the GDPR, went into effect, giving Europeans new rights to protect and control their data. Reynders has heard a series of proposals for how the US could follow suit, including from Meta CEO Mark Zuckerberg and other tech executives, Facebook whistleblowers, and members of Congress and federal officials. But he says there has been no “real follow-up.”
Although the US Federal Trade Commission has reached settlements with tech companies requiring diligence with user data under threat of fines, Reynders is circumspect about their power. “I’m not saying that this is nothing,” he says, but they lack the bite of laws that open the way to more painful fines or lawsuits. “Enforcement is of the essence,” Reynders says. “And that’s the discussion that we have with US authorities.”
Now Reynders fears history is repeating itself with AI regulation, leaving this powerful category of technology unchecked. Tech leaders such as Sam Altman, CEO of ChatGPT developer OpenAI, say they want new safeguards, but American lawmakers seem unlikely to pass new laws.
“If you have a common approach in the US and EU, we have the capacity to put in place an international standard,” Reynders says. But if the EU’s forthcoming AI Act isn’t matched with US rules for AI, it will be more difficult to ask tech giants to be in full compliance and change how the industry operates. “If you’re doing that alone, like for the GDPR, that takes some time and it slowly spreads to other continents,” he says. “With real action on the US side, together, it will be easier.”
ChatGPT is in the crosshairs of both privacy and AI-specific regulatory efforts.
OpenAI in April updated its privacy options and disclosures after Italy’s data protection authority temporarily blocked ChatGPT, but the conclusions of a full investigation into the company’s GDPR compliance are due by October, the country’s regulator says. And an EU-wide data protection task force expects by year’s end to hand down common principles for all member nations on dealing with ChatGPT, Reynders says. All that could force OpenAI to make further adjustments to its chatbot’s data collection and retention.
More broadly, while OpenAI’s Altman has supported calls for new rules governing AI systems, he has also expressed concern about overregulation. In May, headlines thundered that he had threatened to pull services from the EU. Altman has said his comments were taken out of context and that he does want to help define policy.
Reynders says Altman has significant business incentive to make nice with the EU, which has about 100 million more people than the US. “We have asked to have all the major actors in the discussions,” Reynders says. “We want to know their concerns and to see if we will solve that in legislation.” He insists that OpenAI shouldn’t fear new AI rules. “I’ve seen the origin of OpenAI. It’s quite the same idea—to develop new technologies, but for the good,” Reynders says.
But there is at least one area where he would like to push back. Reynders would like to see more AI technologies such as the text-generation models that power chatbots released as open source software, enabling other entities to build upon them. “We have seen huge investments from big tech like Microsoft—I don’t know how much, but certainly more than $10 billion,” Reynders says. “But is it possible to have an open market? Is it possible to see startups and many other companies taking part? To do that, open source is maybe an important element.”
Meta has not launched its new social media app Threads in the EU, due to unspecified regulatory concerns, and Google this week finally launched its chatbot Bard in Europe after months of working on regulatory compliance. While Reynders hasn’t talked with Meta about its situation—he jokes that “maybe with my services, they will be on board”—he says the EU wants to have all major services available to its citizens.
But the EU’s first priority is having Bard and Threads in full compliance with the GDPR, he says. He recognizes that user-supplied data helps tech companies train the AI systems that are increasingly central to all platforms, but he says there must be transparency about that process and limits on holding on to data.
Reynders has proposed legislation that would allow people harmed by AI systems to win compensation from technology developers. He says European lawmakers want to first pass the AI Act’s comprehensive regulations on AI systems, but that the liability proposal can’t wait long, because EU parliamentary elections next June could reshape the bloc’s priorities.
He also plans to urge tech companies to voluntarily comply with yet-to-be-passed rules such as the AI Act, which likely won’t take effect for a couple of years. For instance, images and videos generated with AI should have watermarks reflecting their origins, Reynders says. He also believes chatbots should be barred from answering questions on certain sensitive topics, and that hidden uses of AI in society should be disclosed to users.
Reynders’ US visit coincides with a joint win for EU and US officials. They finalized the third—and they hope, final—agreement allowing companies to store EU citizens’ data on US servers. Reynders says the deal deliberately does not force companies to store data in the EU, where cloud storage capacity is relatively limited. “Store your data locally if it’s needed for your business,” he says. “But if you need to transfer, we try very hard to be sure that the protection is traveling with the data, but that you have the opportunity to transfer the data.”
Two previous transfer agreements have been rejected by the EU’s top court for failing to adequately protect against US authorities prying into the data. Both those challenges were lodged by Austrian privacy activist Max Schrems, and the cases are known as Schrems I and II. Reynders this week bemoaned that some groups had built a business model around bringing cases to the EU Court of Justice. Schrems’ nonprofit organization NOYB, short for none of your business, then demanded an apology for what it described as false allegations.
Reynders tells WIRED he intended only to highlight that he had no doubt that the new agreement would end up in court. “I regret it was a sad impression for him [Schrems]. We’re happy to have a Schrems III decision, but I’m hoping it will be a positive one,” Reynders says.