On Upcoming AI Regulation
I have just started a Senior Fellowship with the German Mercator Foundation that focuses on the practical implementation of future European AI regulation by small and medium-sized enterprises and start-ups in the AI sector. The goal is to make sure AI developers understand and support necessary regulation of Artificial Intelligence, and to give them a voice with regulators and policy-makers so that upcoming legislation does not remain on paper only. I am based at AI Campus Berlin and also collaborate with GovTech Campus Germany.
My previous work as Fellow in Residence with the Mozilla Foundation focused on how the European Ethics Guidelines for Trustworthy AI will be implemented in Silicon Valley – whether directly or indirectly, if at all – and what the movement's voice on these guidelines should be, under realistic circumstances, on the ground and in international fora. My perspective also incorporated other related EU regulation that affects AI, in particular the GDPR and the resulting obligation to implement the principles of "privacy by design" and "privacy by default" (Art. 25 GDPR).
In recent years, with the US federal legislature remaining relatively silent on tech rules, and following the success of the General Data Protection Regulation in setting the global standard for data protection, Europe has doubled down on its ambition to be the de facto global rule-setter for technology.
As a former German diplomat, I took part in the early stages of the GDPR negotiations in a high-impact role in the European Affairs team at the German Foreign Office. I know European priorities in this field and the realities of rule-setting in a partly supranational, partly intergovernmental setting of 27 governments and some 450 million citizens.
Having worked with tech companies as a privacy professional in the Bay Area specializing in GDPR (CIPP/E) for several years, I have seen first-hand that many US businesses apply GDPR standards to all their clients, not only those in Europe. While my clients are mainly small and medium enterprises based in the US – with only some clients in Europe, or with the mere intention of soon expanding to Europe – this privacy management strategy is found among bigger tech firms as well. Organizations appreciate that there is now a standard that is law in one part of the world but can serve as a guideline for other parts as well. Even where this guideline is more demanding than legislation in their markets outside of Europe, it is easier for them to comply with one high-profile standard than with many different ones (i.e. the growing "global privacy patchwork") or none at all.
During my Mozilla Fellowship, I worked on finding out what impact the new European Ethics Guidelines for Trustworthy AI will have on US businesses, how useful those businesses find them, how activists evaluate them, and whether we will therefore see a trend similar to the one we have seen with the GDPR. My work supported by the Mercator Foundation is a necessary follow-up and will help make regulation in the AI sector more impactful and more beneficial for society.