Non-Profit Work
on Upcoming AI Regulation
Thanks to a Senior Fellowship with the German Mercator Foundation (2022-23) and a Fellowship with the European New School of Digital Studies at Europa-Universität Viadrina (2023-24), I focused on the practical implementation of future European AI regulation by small and medium-sized enterprises and start-ups in the AI sector. The goal is to ensure that AI developers understand and support necessary regulation of Artificial Intelligence, and to give them a voice with regulators and policy-makers so that upcoming legislation does not remain on paper only. I was based at AI Campus Berlin and also collaborated with GovTech Campus Germany.
My previous work as Fellow in Residence with the Mozilla Foundation focused on how the European Ethics Guidelines for Trustworthy AI would be implemented in Silicon Valley – whether directly or indirectly, if at all – and what the movement’s voice on them should be, under realistic circumstances, on the ground and in international fora. My perspective also incorporated other related EU regulation that affects AI, in particular the GDPR and the resulting obligation to implement the principles of “privacy by design” and “privacy by default” (Art. 25 GDPR).
With the US federal legislator remaining relatively silent on tech rules under the Biden Presidency, and following the success of the General Data Protection Regulation in setting the global standard for data protection, Europe doubled down on its ambition to be the de facto global rule-setter for technology. It has continued in this role in recent years, albeit in a more adverse and still very fragmented global regulatory environment that has pushed Europe to advocate for its regulatory approach in a different way.
As a former German diplomat, I took part in the early stages of the GDPR negotiations in a high-impact role on the European Affairs team at the German Foreign Office. I know European priorities in this field and the realities of rule-setting in a partly supranational, partly intergovernmental setting of 27 governments and around 450 million citizens.
Having worked for several years as a privacy professional (CIPP/E) specializing in GDPR with tech companies in the Bay Area, and now experiencing a corporate environment from the inside, I have seen first-hand that many US businesses apply GDPR standards to all their clients, not only those in Europe. My clients at the time were mainly small and medium-sized enterprises based in the US, with only a few clients in Europe or merely the intention of expanding there soon, but the same privacy management strategy can be found at bigger tech firms as well. Organizations appreciate having a standard that is law in one part of the world but can serve as a guideline elsewhere; even if that guideline is more demanding than legislation in their markets outside Europe, one high-profile standard makes life easier for them than many different ones (the growing “global privacy patchwork”) or none at all.
During my Mozilla Fellowship, I investigated what impact the new European Ethics Guidelines for Trustworthy AI would have on US businesses, how useful those businesses find them, how activists evaluate them, and whether we will therefore see a trend with them similar to the one we have seen with the GDPR. My subsequent work, supported by ENS Viadrina and the Mercator Foundation, was a necessary follow-up and helped make regulation in the AI sector more impactful and more beneficial for society.