
My Year as a Senior Mercator Fellow: 

Navigating Europe's AI Governance Ecosystem and Bridging Gaps between AI Innovators and Civil Society


The buzz around Artificial Intelligence (AI) governance is hard to ignore these days, and rightfully so. But it's crucial to remember that this wasn't always the case; the spotlight on AI regulation truly intensified only last fall. However, experts, international organizations, and regulatory authorities in numerous countries have been pondering this issue for many years.

 

My journey into the realm of AI governance has been a meandering one. It has taken me from 

  • facilitating dialogue between the German government and tech companies in San Francisco and Silicon Valley to 

  • advising small companies and startups on adapting to the European General Data Protection Regulation (GDPR), and on to 

  • nonprofit work preparing small tech companies for the EU's AI regulation while fostering connections with civil society.

Through this journey, encompassing the public, private, and philanthropic sectors, I've come to appreciate that the European Union's proactive stance on addressing AI governance was not premature. On the contrary, many international actors now rely on the collaborative efforts of European experts and their global allies. 

 

My journey has also exposed the divides we must overcome to reach a common understanding of what successful AI regulation requires. In my view, it is imperative, especially for Europeans and anyone who values diversity, choice, and the protection of civil rights, to have confidence in the innovation generated by startups and in its value for democracy, compared to an oligopolized market dominated by Big Tech. At the same time, startups must understand the regulatory expectations – which I hope we won't compromise on in the current final phase of the Brussels negotiations – and integrate them into their work promptly. 

 

Here's a timeline to illustrate where we stand in the political process concerning the AI Act:

 

[Timeline graphic: the AI Act's path through the EU institutions]

For a first analysis of the Commission proposal, written together with Mozilla Foundation's Max Gahntz (a fellow member of nefia e.V. – Network for International Affairs), see our blog post from spring 2021. As the timeline shows, it has taken considerable time for the Commission's draft to navigate the various EU institutions. We anticipate an agreement between the Council and Parliament by the end of this year or the beginning of 2024.

 

This timeline provides the context for my Mozilla Fellowship in Residence spent in California, my Senior Mercator Fellowship at AI Campus Berlin, and my endeavors over the coming months (spoiler alert – see below for more), along with my involvement in the AI ecosystem in both the US and Europe, including dialogue with the institutional players in Brussels. These experiences have significantly shaped my perspective on the draft law.

 

What I've been interested in all along is how we can make sure that the rules proposed by the EU genuinely produce the desired impact. That, to me, is eminently important and, in the words of philanthropy, "gemeinnützig" – benefiting our society. I discussed this with startups in California and carried those insights into my strategic consulting role with the European AI & Society Fund and into two reports that I published with them, one in 2021 and one in 2023. During that time I crossed paths with Stiftung Mercator and their team lead Carla Hustedt, who recognized the synergy between my work and her vision of studying concrete implementation. That recognition led to the Senior Fellowship opportunity. I am indebted to my hosts at AI Campus Berlin – Rasmus Rothe, Adrian Locher, Nicole Büttner, Janette Wiget, Hendrik Remigereau, Sif Björnsdottir, and Arantxa Gomez – as well as Lars Zimmermann and Maximilian Maxa at GovTech Campus for their unwavering support throughout the year.

 

Here are the key requirements that companies with AI systems in certain categories must meet under the future AI Act:

 

[Slide: the key requirements for providers of high-risk AI systems under the draft AI Act]

My aim under the Senior Fellowship hasn't been to provide individual companies and startups with advice on how to implement these new rules to remain competitive in the market. Instead, I emphasize the broader impact that a practicable definition of, and actionable guidance on, those regulations can have for small and medium-sized companies:

 

  • It benefits people in Europe because our domestic AI industry is almost entirely “small”.

  • It aligns with our regulatory vision that small AI innovators have a chance to stand their ground in a market dominated, often unfairly, by Big Tech.

  • It enhances public understanding of AI, in line with the principles of our democracy: people learn what benefits AI brings, how to identify and limit its risks, and to what degree to trust it.

 

Over several years, I accompanied the rollout of the General Data Protection Regulation (GDPR), first as a diplomat in EU coordination and tech diplomacy outreach in the United States, and later as a certified data protection consultant in Silicon Valley. Over time, I observed how smaller innovators were squeezed further out of the market by Big Tech – an effect certainly not intended by policymakers, but one that reinforced a trend that a failed competition policy of the preceding 20 years had already created. Big Tech companies terminated vendor agreements because they assumed the vendor, usually a smaller company, would not guarantee compliance with the GDPR at the same level and with a compatible approach. Instead, they internalized those functions, claiming comprehensive GDPR compliance of their own, since this rendered data processing agreements unnecessary. In my view, this ploy primarily aimed to sever dependencies on service providers, preventing them from becoming or remaining independent market players. 

Such an oligopoly poses significant challenges to our society, especially in the context of AI. Typically, the more data is consolidated under one roof, the more potent the AI system becomes – and the more difficult it becomes to control with regard to human rights and democratic checks.

 

Drawing from that experience, I have been studying and publicly advocating what needs to be done to protect diversity and competition in the AI industry, and encouraging small AI companies to prepare in advance for regulatory compliance, so that the AI Act does not produce an effect similar to the GDPR's.

 

I've dedicated my work in the past year to six key areas:

1. in-depth discussions with AI developers about the technical feasibility and political objectives of the future requirements

2. building bridges between AI developers and civil society

3. contributing to the public discourse on AI regulation, particularly the importance of small innovators and their unique challenges

4. participating in strategy definition on the AI Act within the AI federal association

5. influencing policymaking in Brussels and Berlin

6. expanding my network within the AI ecosystem in Germany and Europe.

 

This post serves as a follow-up to and complements my earlier posts discussing my first month into the Senior Fellowship and my mid-term summary at the six-month mark.


1. In-depth Conversations with AI Developers


The real heart of my project. 

 

I've had these kinds of conversations bilaterally with AI developers countless times throughout the year: How do the intended rules work, and how are they technically implementable? What do you think is missing here? Is this concrete enough for you? How can transparency be created for you, how can the system be explained, how can outputs be traced back? To what extent do you already do this, and what would be new?

 

Few AI developers at this point have concrete ideas about what practical compliance with the rules of the AI Act will look like. However, when presented with the specific requirements and how they'd apply to their work, they perceive them as an additional part of their quality assurance processes. This often comes with the hope that, under the pressure of regulation, investors will grant them the time they need for thorough testing and quality control. State-of-the-art quality control often gets bypassed these days for the sake of a fast launch to boost sales. Many engineers were motivated by my work to develop concrete ideas on how to achieve technical compliance and streamline it through code modifications, provided they are given the opportunity to do so.
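
To make one of these threads concrete: a recurring question was how an output can later be traced back to the exact model version and input that produced it. Below is a minimal sketch, in Python, of what such traceability plumbing could look like. The function names, record fields, and log format are my own illustrative assumptions – nothing here is prescribed by the AI Act or its draft standards.

```python
import hashlib
import json
import time
import uuid

def traced_predict(model, model_version: str, features: dict,
                   log_path: str = "prediction_log.jsonl") -> dict:
    """Run a prediction and append an audit record to a JSONL log.

    Hypothetical illustration: the goal is that any output can be traced
    back to the model version and (hashed) input that produced it.
    """
    record_id = str(uuid.uuid4())
    prediction = model(features)  # the model itself stays untouched
    record = {
        "record_id": record_id,
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash the input instead of storing it raw, to keep the log lean
        # and avoid duplicating potentially sensitive data.
        "input_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return {"record_id": record_id, "prediction": prediction}

if __name__ == "__main__":
    # Stand-in "model" for demonstration purposes only.
    dummy_model = lambda feats: {"score": 0.87, "label": "approve"}
    result = traced_predict(dummy_model, "credit-scorer-1.4.2",
                            {"income": 52000, "tenure_years": 3})
    print(result["record_id"], result["prediction"])
```

A thin layer like this, added early and voluntarily, illustrates why the record-keeping side of compliance need not be daunting: the documentation duty becomes routine plumbing rather than an afterthought.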

 

It's becoming increasingly clear to decision-makers in startups that compliance with the AI Act will also entail more administration and documentation. That, if you know them, is against their nature. They want to act quickly, make agile decisions, work paperlessly, lose no time, grow, and bring in return on investment. The additional effort that the future EU rules will require – mainly for systems in the areas of critical infrastructure, educational or vocational training, safety components of products, employment and recruiting, essential private and public services (e.g. credit scoring that can deny citizens the opportunity to obtain a loan), law enforcement, migration, asylum and border control management, and the administration of justice and democratic processes, which under the draft law all fall into the "high-risk" category – is a deterrent at first. 


I estimate that each AI company needs to hire at least one person with a solid background in governance and regulatory affairs to take care of these things, at least for the initial establishment of a practicable internal governance mechanism and – for those who fall under the "high-risk" category – the registration in the planned EU register. Obviously, that's significant for a startup that often has less than a handful of staff. A co-working space housing many AI companies, like AI Campus Berlin, could hire one person to serve all resident startups whenever the need arises, and potentially later a pooled team of regulatory experts shared by all of them. Whether pooled or not: most of the 400 member companies in Bundesverband Künstliche Intelligenz have fewer than 250 employees (the common threshold to count as an SME). With the upcoming regulatory burden, that means high demand for new compliance staff. Law firms and consultancies are betting on this upcoming need, but given their hourly rates and the fact that a tailored approach requires a regulatory expert who knows the specificities of an AI product very well, I do not recommend spending too much on external lawyers; instead, build your own internal know-how.

 

Again, this is new terrain for many startups in the AI space. But if you look at companies, small or large, operating in markets that are already regulated, it's nothing new at all. Someone who builds AI systems for the medical field, for insurance or for financial instruments already deals with regulation on a daily basis. Due diligence, more intense testing and quality control are daily practice for these companies. I've spent a lot of time making that point and encouraging AI builders not to shy away from these markets, because we need them there. I want to see European AI companies building responsible and compliant instruments for what the EU law calls "high-risk" areas. I want them to challenge non-European AI providers thanks to their innate understanding of European market access rules, norms, and the values that led to the AI Act. Which is why I recommend calling those AI systems not "high-risk" but "high-impact" or "high-responsibility". These are the areas where we should push smaller AI companies to increase their efforts and meet regulatory expectations early on. This is my hope as a citizen betting on the benefits of AI in a functioning democracy, upholding human rights and – call it as simple as it is – consumer protection.

 

What we also need are investors who understand these political and social contexts and factor them into their decisions – investors who realize that backing short-lived business models that ignore societal impact will not pay off in the long run. For them, in today's market, adding time for quality assurance is a problem. But it shouldn't be. In investor circles, awareness of the upcoming AI Act requirements and their rationale still seems, on average, rather low. 


2. Building bridges between AI developers and civil society


Certainly the most tangible example of my building bridges between civil society and AI developers was a workshop I hosted at AI Campus in late June 2023. Eight team members of the advocacy organization Algorithm Watch, including their executive director Matthias Spielkamp, joined by an expert from Gesellschaft für Freiheitsrechte (Society for Civil Rights), represented digital civil society. Eighteen AI engineers from different Berlin-based companies contributed their tech perspective. The menu consisted of selected principles and regulatory requirements of the European AI Regulation, and we had both fun and important insights along the way. The event was quite a ride across concepts, convictions, and technical arguments, and it encouraged mutual understanding and curiosity between the two groups. I am deeply convinced that we need more ties and exchange between the two "silos" of digital advocacy and Small Tech – there is so much hidden overlap!

 

And as promised: in August, I took the concept to California! Mark Nitzberg, executive director of the Center for Human-Compatible AI (CHAI) at UC Berkeley, invited me to the beautiful East Bay campus to host a workshop on the AI Act, with extremely enriching and insightful contributions from the crowd we gathered. One insight I took from it is how far along Europe is in building governance that tackles the many everyday issues with AI – so far that by now I almost have a hard time abstracting the concrete requirements we're working on back up to the bird's-eye level the debate (still) takes in the US, where at CHAI dedicated computer scientists discuss existential threats potentially arising from progress in AI and what policies could make AI more "compatible with humans". 


At the other end of the Bay Area, down in Silicon Valley, and literally at the other end of the spectrum of AI governance research compared with CHAI Berkeley, Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University, is partnering with me to host a workshop similar to the Berlin version later this year, once the AI Act is finalized. 

 

Back during my Mozilla Foundation Fellowship in Residence, I held workshops and discussions in the US about the future AI Act on a regular basis. Now I've been gratified to witness how perceptions of AI regulation have evolved on the US West Coast over the past year.

 

 

3. Contributing to the public discourse on AI regulation, particularly the importance of small innovators and their unique challenges


From the beginning, my Senior Fellowship has not been about writing research papers or books but about engaging in operational work and bringing people together for meaningful exchange.

 

I've actively shared my findings and perspectives through panel discussions and expert gatherings. An explicit effort went into addressing very diverse audiences, such as 

  • the entrepreneur and investor initiative AI.Hamburg, where I spoke on panels on two occasions,

  • the more political and think-tank-heavy Digital Europe 2030 project of Alfred Herrhausen Gesellschaft, which appointed me to its expert panel and gave me the chance to deliver a keynote on "Democracy by Design",

  • the Democracy Summit of the “Mehr Demokratie” initiative, 

  • an association of investors and law firms, 

  • and of course my home base, AI Campus Berlin, where I spoke on "AI for and by users" with Robert Bosch Foundation Fellow Amy Sample Ward and Mozilla Head of Global Public Policy Udbhav Tiwari.

 

I also invested time in addressing multipliers in our media landscape. AI is already a significant part of journalism, but its adoption is unevenly distributed. Its potential and challenges for society, research, industry, and the newsroom alike are not yet fully understood by most opinion-makers in Germany. To change that, I invited a group of editors-in-chief of German-speaking media, ranging from local newsrooms to national print and digital outlets, to AI Campus Berlin in late January as part of the "Chefrunde – The media executives circle" program. They spent an immersive, high-level, and comprehensive afternoon delving into the state of AI development in Germany and Europe, the Campus' objectives, the use of AI in journalism, biotech, car software, privacy engineering, and countless sectors beyond, as well as AI ethics and the upcoming governance of it all. My goal was to plant the topic, with its manifold facets, in the minds of our media and opinion makers, and to prominently present the civil society perspective to them. And to continue my collaboration with Chefrunde's founder Annette Milz, ten years after our first Chefrunde in San Francisco and many more in the years in between.

 

We urgently need to overcome silos in the approach to software development, regulation and the legitimate concerns of civil society. Building bridges was a main concern for me throughout the year. 

  • I am an early member of Responsible Innovators, a professional network that has been forming in Berlin since last December. Jolanda Rose, Roberta Barone, Ana Chubinidze and many more have become important sparring partners for me and we are enthusiastic about future opportunities to collaborate.

  • Together with Jolanda, I brought Ryan Carrier, founder of the non-profit ForHumanity – to whose work on effective and globally recognized auditing of algorithms I have been contributing for three years – to AI Campus Berlin in January for a lively and insightful discussion event on AI audits. It is never too early to professionalize audits of algorithmic systems, and ForHumanity has developed the best framework I have yet encountered globally. Definitely THE space to watch in the coming months and years. And AI engineers, including my friends at AI Campus, are the first who should understand what AI auditors (who may come from the humanities, too, on top of solid training in the technical evaluation of AI systems) expect from them.

  • The Women in AI Ethics initiative selected me as one of the 100 Brilliant Women in AI Ethics 2023, and greater networking and advancement of women in the sector are near and dear to my heart.

 

I have also been using my blog to disseminate my findings and reflections:

  • In fall 2022, I published a blog post with Maarten Stolk, a startup founder from the Netherlands, on "Why AI Regulation Can Foster Innovation." It was important to me to demonstrate that startup founders share my perspective – or better, complement it – that AI regulation is in no way an obstacle for small AI companies. Maarten is a leader in his field, and we agree that specifically for small AI firms it is much better to navigate a regulated environment – provided regulation is done in a clear, comprehensible way and enough guidance is given to companies without huge legal teams – than to move in a Wild West-like environment that leads them into legal uncertainty in case of harms caused by their AI, or makes them err on the side of caution and not venture into "high-risk", i.e. high-impact, uses of AI at all.

  • Currently, another text is in the works on the technical implementation of AI regulation, with Jan Zawadzki, the former head of AI at VW's software subsidiary Cariad, who is building a startup to get AI certification off the ground in Germany. Jan and I want to demonstrate that technical compliance with the AI Act is not rocket science, not even for small AI builders without big legal teams. The advantages of pre-compliance before enforcement of the Act actually begins – even on a voluntary basis for uses categorized as "low-risk" – outweigh the disadvantages, which I honestly see only in the loss of launch speed (if that is a disadvantage at all) and in the documentation and due diligence work that many startup people fear.

  • I also published blog posts at the one-month and six-month milestones of my Fellowship year with more details on my progress at those points. If you like this post, you might find them interesting, too!

 

Flattering AND substantial: the German magazine AufRuhr gave me the opportunity to showcase my work twice in the last few months. 


4. Participating in the strategy definition on the AI Act within the AI federal association


The Bundesverband deutscher KI-Unternehmen (Federal Association of German AI Companies) appointed me to its expert group back in August of last year. Since then, in the association's AI Regulation Steering Committee, I have been discussing online every two weeks with the association's management, staff, and a handful of other experts from academia and industry what stance the association should take on the EU AI Act and how to establish the appropriate channels in Brussels. The five of us in the expert group think very differently. I definitely bring a strongly civil-society-oriented and consumer-protection perspective, and from time to time I wondered how welcome this view is in an industry association that is at times dominated by "regulation stifles innovation" slogans (which I don't agree with). But I've always been encouraged and invited to bring in my perspective. It's an important channel that too few civil society arguments reach, and I am taking the opportunity to voice my positions there.

 

 

5. Influencing policymaking in Brussels and Berlin


Influencing policy has been a significant part of my journey. Originally, I thought I had done my part with our Mozilla Fellows' consultation contribution to the Commission's white paper in summer 2020 and several discussions I had with Commission staff during the Mozilla Foundation Fellowship. As a senior fellow in Berlin, my focus then turned to the German government, since during the second half of 2022 member states were negotiating the Council's common position on the AI Act. In the end, however, the German government remained so passive about this legislative project last year that my focus shifted back to Brussels. Several times during the Fellowship I made stints to Brussels, invited by the European Policy Center or other fora to invitation-only brainstormings on the AI Act, where I discussed my ideas with MEPs, their staff, Commission representatives, trade unionists, consumer protection experts, and others. I also used these visits for meetings with other contacts from the institutions, law firms, NGOs, and associations, and returned to Berlin each time fulfilled, energized, and motivated. (Not many people share that sentiment with me – Brussels and its EU ecosystem really is a source of energy for me!) Much of what I could contribute to the mutual information and fruitful exchange among my target groups all over Europe was gained from these conversations in Brussels. 


This is an ongoing endeavor: we are now in the final stages of the trilogue meetings, and I'm off to Brussels again in a couple of days, hoping to present my arguments to lawmakers once more and encourage them to "keep the (regulatory) bar high" – but also to consider meaningfully helping small AI firms clear it. 


6. Expanding my network in the AI ecosystem in Germany and Europe


Keep in mind that I spent ten years in the U.S. and came to Germany last summer with, admittedly, a certain network of institutional and personal contacts (after all, I had already been interviewed at re:publica as a Mozilla Fellow in Residence in 2021 and had managed projects and consulted with Zeit Foundation and the Humboldt Institute for Internet and Society). During my Senior Mercator Fellowship year, I established bonds with many relevant players in my field and made my work and ideas better known – and in doing so, raised Stiftung Mercator's profile as a player in my field.


To sum up:

 

It has been a year filled with impactful work, during which I was deeply involved in public debates and forged invaluable connections. I've come to realize that my work within this niche is essential and remains ongoing. I'm excited to share that I'll be starting a fellowship at Europa-Universität Viadrina next week, with Prof. Philipp Hacker at the European New School of Digital Studies, and with a continued focus on AI startups and civil society – a testament to the impact and necessity of my work. 

 

The focus, as Philipp Hacker and I agree, will be on the intersection of AI startups and civil society as I pursue my work at AI Campus Berlin and combine it with some teaching at Viadrina. I'll further develop workshops, engage in bilateral discussions on AI regulation, build more expertise on AI audits, discuss standardization efforts and hopefully contribute to them, and actively support pre-compliance efforts. 

 

Time is of the essence, and more and more startups and AI developers are seeking guidance on how to meaningfully prepare for the evolving regulatory landscape. It is my belief that supporting these smaller innovators and connecting – and confronting – them with the civil society perspective is vital if we aim to responsibly foster high-impact AI applications "made in Europe".
