OpenAI Proactively Addresses Data Privacy Anxieties in the EU

Late last month, while much of Europe was still working through the holiday chocolate selection box, ChatGPT maker OpenAI was busy emailing users about an upcoming update to its terms that appears designed to shrink its regulatory risk in the European Union.

The AI giant's technology has come under early scrutiny in the region over ChatGPT's impact on people's privacy, with multiple open investigations into data protection concerns linked to how the chatbot processes people's information and the data it can generate about individuals, including probes by watchdogs in Italy and Poland. (Italy's intervention even saw ChatGPT temporarily suspended in the country until OpenAI updated the restrictions and information it offers users.)

“We have changed the OpenAI entity that provides services such as ChatGPT to EEA and Swiss residents to our Irish entity, OpenAI Ireland Limited,” OpenAI wrote in an email to users sent on December 28.

OpenAI's Privacy Policy for Europe has also been updated in parallel, and it now states:

OpenAI Ireland Limited is the controller and in charge of processing your personal data as outlined in this privacy policy if you reside in the European Economic Area (EEA) or Switzerland. Its registered office is located at 1st Floor, The Liffey Trust Centre, 117-126 Sheriff Street Upper, Dublin 1, D01 YC43, Ireland.

The new terms of use, which designate its recently founded Dublin-based subsidiary as the data controller for users in the European Economic Area (EEA) and Switzerland, where the EU's General Data Protection Regulation (GDPR) is in force, take effect on February 15, 2024.

Users are advised to deactivate their accounts if they don't agree with OpenAI's revised rules.

The GDPR's one-stop-shop (OSS) mechanism lets businesses that process Europeans' data streamline privacy oversight under a single lead data supervisory authority located in an EU Member State, or, to use the regulation's own language, in the country where they are "main established."

Gaining this status essentially reduces the ability of privacy regulators elsewhere in the bloc to act unilaterally on complaints. Instead, they would typically refer them to the lead supervisor of the company's main establishment for consideration.

Other GDPR regulators can still step in locally if they see pressing risks, but such interventions are usually only temporary. They are also exceptional by design, with the bulk of GDPR oversight funnelled through a lead authority. That is why the status has proven so enticing to Big Tech: it allows the most powerful platforms to streamline privacy oversight of their cross-border processing of personal data.

Asked whether OpenAI is working with Ireland's privacy watchdog to secure main establishment status for its Dublin-based entity under the GDPR's OSS, a representative for the Irish Data Protection Commission (DPC) told TechCrunch: "I can confirm that Open AI has been engaged with the DPC and other EU DPAs [data protection authorities] on this matter."

OpenAI was also approached for comment.

When the AI giant opened its Dublin office in September, it was initially hiring for back-office roles and a handful of policy, legal and privacy positions.

Local hiring looks scarce so far, though: as of this writing, just five of the 100 openings advertised on its careers page are based in Dublin. Applications are also being accepted for a policy and partnerships lead position in Brussels, with candidates asked to indicate whether they are available to work three days a week from the Dublin office. But the vast majority of the AI giant's open roles are listed in San Francisco, California.

One of OpenAI's five Dublin-based openings is for a privacy software engineer. The other four are: sales engineer; media relations, Europe lead; international payroll specialist; and account director, platform.

Gaining main establishment status under the GDPR won't be a matter of simply filing some legal paperwork and ticking a box; who, and how many people, OpenAI hires in Dublin will also count. The company must convince the EU's privacy regulators that the Member State-based entity it has designated as legally accountable for Europeans' data is genuinely capable of influencing decisions about that data.

That entails having the necessary expertise and legal structures in place to exert influence and impose meaningful privacy safeguards on a US parent.

Put another way, a front office in Dublin that merely rubber-stamps product decisions made in San Francisco should not be enough.

That said, OpenAI may take some comfort from the example of X, the business formerly known as Twitter, whose change of ownership in the fall of 2022 caused a stir. X has nonetheless managed to remain inside the OSS since Elon Musk took charge, even though the volatile billionaire owner took a savage approach to its local workforce, drove away key specialists and made what appear to be highly arbitrary product decisions.

If OpenAI gains GDPR main establishment status in Ireland, obtaining lead oversight from the Irish DPC, it would join the likes of Apple, Google, Meta, TikTok and X, to name a few of the multinationals that have opted to make their EU home in Dublin.

The DPC, meanwhile, continues to attract substantial criticism over the pace and cadence of its GDPR oversight of local tech giants. And while recent years have seen a number of headline-grabbing penalties on Big Tech finally roll out of Ireland, critics point out the regulator often advocates for substantially lower penalties than its peers. Other criticisms include the glacial pace and/or unusual trajectory of the DPC's investigations, or instances where it chooses not to investigate a complaint at all, or opts to reframe it in a way that sidesteps the key concern (on the latter, see, for example, this Google adtech complaint).

Any existing GDPR probes of ChatGPT, such as those by regulators in Italy and Poland, are likely to run their course, since they concern data processing that predates any future main establishment status the AI giant may gain, and they could still prove consequential in shaping the regional regulation of OpenAI's generative AI chatbot. But it's less clear how much impact they will ultimately have.

As a refresher, Italy's privacy regulator has been looking at a long list of concerns about ChatGPT, including the legal basis OpenAI relies upon for processing people's data to train its AIs. Poland's watchdog, meanwhile, opened a probe following a detailed complaint about ChatGPT, including how the AI bot hallucinates (i.e., fabricates) personal data.

Notably, OpenAI's updated European privacy policy also includes more details on the legal bases it claims for processing people's data, with new wording that frames its reliance on a legitimate interests legal basis for processing people's data to train its AI models as being “necessary for our legitimate interests and those of third parties and broader society”.

The current OpenAI privacy policy, by contrast, contains a much drier line on this element of its claimed legal basis: “Our legitimate interests in protecting our Services from abuse, fraud, or security risks, or in developing, improving, or promoting our Services, including when we train our models.”

This suggests OpenAI may intend to defend its vast, consentless harvesting of Internet users' personal data for generative AI profit to concerned European privacy regulators by making some kind of public interest argument for the activity, in addition to citing its own (commercial) interests. However, the GDPR provides a strictly limited set of six valid legal bases for processing personal data; data controllers can't just pick 'n' mix bits from the list to invent their own bespoke justification.

It's also worth noting that GDPR watchdogs have already been trying to find common ground on how to tackle the tricky intersection of data protection law and big data-fuelled AIs, via a taskforce set up within the European Data Protection Board last year, although it remains to be seen whether any consensus will emerge from that process. And given OpenAI's move to establish a legal entity in Dublin as the controller of European users' data, Ireland may, down the line, get the defining say in the direction of travel when it comes to generative AI and privacy rights.

If the DPC becomes OpenAI's lead supervisor, it would have the ability, for example, to slow the pace of any GDPR enforcement on the rapidly advancing tech.

Already, last April in the wake of the Italian intervention on ChatGPT, the DPC’s current commissioner, Helen Dixon, warned against privacy watchdogs rushing to ban the tech over data concerns — saying regulators should take time to figure out how to enforce the bloc’s data protection law on AIs.

OpenAI specifies that users in the UK are not covered by its legal basis shift to Ireland; they instead fall under the jurisdiction of its US corporate entity, which is domiciled in Delaware. (Post-Brexit, the UK is no longer bound by the EU's GDPR, though it retains its own version of the regulation in national law, a data protection framework that is still historically based on the European one. That is set to change, however, as the UK moves away from the bloc's gold standard for data protection via a rights-diluting "data reform" bill passing through parliament.)