

Why AI regulation can foster innovation

by Maarten Stolk and Julia Reinhardt

December 19, 2022

original source: Deeploy blog

Providing guardrails to operationalize AI

Over the years, many of us have experienced the difficulties of operationalizing AI, with challenges ranging from technical integration to stakeholder acceptance and legal bottlenecks. Fortunately, over the past few years, more and more (high-impact) AI applications have gone to market in industries like banking, insurance, healthcare, and (even) government. Models are used to detect fraud, monitor transactions, determine risks, help with administrative tasks, or assist in repetitive decision-making.

The general public, particularly in Europe, often has reservations about the use of AI in areas that have so far been dominated by human decisions. Fortunately, this attitude is shifting, slowly but surely, as understanding of the impact, opportunities, and risks of AI grows. With the realization that AI is here to stay, and with public authorities acknowledging its importance, the view on AI is maturing. Whereas a few years ago public authorities like central banks were hesitant to approve high-risk AI applications, more recently judges have ruled in favor of using AI, considering that in specific cases it is already more reliable and less biased than humans.

Still, the majority of projects never see the light of day. What is usually the reason for that, besides technological immaturity or lack of funding? Is overregulation (in Europe) the scapegoat, as many say? Contrary to what we often read and hear, in our opinion it is not killer regulation that blocks AI innovation (or would block it once more ambitious regulation takes effect), but rather the lack of regulation, and hence of standardization, that leads to a series of undeployed AI projects. Here’s why:

Why AI doesn’t go to market

Traditionally, most AI projects have followed a few iterative steps, including data exploration, modeling, and the deployment of models. Each step comes with new challenges, like access to and quality of data, the accuracy of predictions, or difficulties integrating with legacy systems.

This has led to too much focus on model development and not enough on the (AI) system as a whole. Hence, while the model has high accuracy, it is often hard to integrate and doesn’t fit well with the way its end users work. What commonly follows are long internal discussions, trying to convince stakeholders to use what was built, without always taking their criticism seriously.

Furthermore, risks are often overlooked. Data Protection Officers (DPOs) are not always involved right from the beginning, and mature software practices are skipped or postponed to a later stage. This means important aspects of DevOps, SecOps, and MLOps, like version control, documentation, monitoring, traceability of decisions, feedback loops, and explainability, are not included in the first versions of an AI system, or are only considered at a stage when too much of the system has already been established.
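To make “traceability of decisions” a bit more concrete, here is a minimal sketch of what such record-keeping could look like in practice: every prediction is logged together with the model version and a hash of the inputs. The names, fields, and structure are our own illustrative assumptions, not a prescribed standard.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("prediction-audit")

MODEL_VERSION = "credit-risk-1.4.2"  # hypothetical model identifier

def log_prediction(features: dict, prediction: float) -> None:
    """Record a single model decision so it can be traced back later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        # Hash the inputs instead of storing raw (potentially personal) data.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    logger.info(json.dumps(record))

# Example call:
# log_prediction({"income": 52000, "loan_amount": 12000}, prediction=0.07)
```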

Embrace the design phase to build successful AI applications

We tend to underestimate the value of the design phase in AI projects, i.e., the design of the infrastructure, but also the UX/UI design of the interface for end users.

By applying design thinking, we push ourselves to think about the end product that we would like to realize. This way, we raise questions much earlier about, for instance, the comprehensibility of model outcomes, the traceability of decisions, alerts, monitoring, and processes. This approach also forces us to think about integration right away, at the system level rather than only at the model level.

It also means that development projects become dramatically longer. The question is: Is that actually a problem? In our experience it isn’t, as the probability of success also increases dramatically. Both end users and compliance and risk officers are much more likely to accept a product when developers can show that they have thought carefully about both integration and risks, and have mitigated the latter. Or, to put it differently: simply don’t start building if you can’t deal with the risks in the design phase.

The role of regulators

Process standardization can help us out, pushing us to properly design an AI system before it is built. The European AI Act, currently in its final negotiation stage at EU level, covers a full spectrum of criteria that are often all important yet easy to overlook or underestimate, especially at an early product stage. They are, in our experience, useful guardrails that help AI developers incorporate essential functions during the design phase of an AI system.

For example, the requirement for effective human oversight forces us to think about ways to give humans the appropriate controls to work with AI for high-risk use cases.

Other important requirements covered by the AI Act include the following (a small sketch turning them into a design-phase checklist follows this list):

Risk management systems

Data and AI governance

Technical documentation

Record-keeping and reproducibility, while adhering to privacy standards

Transparency and provision of information to understand the system

Accuracy, robustness and data protection
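One way to use these criteria as guardrails is to turn them into an explicit design-phase checklist that has to be completed before development starts. Below is a minimal sketch; the class and field names are our own suggestion, not terminology from the AI Act.

```python
from dataclasses import dataclass

@dataclass
class DesignPhaseChecklist:
    """Design-phase guardrails loosely mirroring the criteria listed above."""
    risk_management_plan: str = ""       # how risks are identified and mitigated
    data_and_ai_governance: str = ""     # data sources, quality checks, bias review
    technical_documentation: str = ""    # architecture, design choices, assumptions
    record_keeping: str = ""             # what is logged, retention, privacy measures
    transparency_measures: str = ""      # information provided to understand the system
    accuracy_and_robustness: str = ""    # target metrics and monitoring thresholds
    human_oversight: str = ""            # who can intervene, and how

    def open_items(self) -> list[str]:
        """Return the criteria that have not been addressed yet."""
        return [name for name, value in vars(self).items() if not value.strip()]

checklist = DesignPhaseChecklist(risk_management_plan="risk workshop planned for sprint 1")
print(checklist.open_items())  # everything still to address before building starts
```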

 

To be honest, these future requirements should be part of any development process for high-impact AI systems. Making them mandatory now – for high-risk applications of AI only – encourages developers to do their due diligence, as they would with any pharmaceutical product. And we believe that, for a product in the high-risk category, the processes are, and should be, similar.

 

The European AI Act actually encourages companies to apply these criteria voluntarily to non-high-risk applications of AI as well. By doing so, the question of whether a system is high-risk or not – which at this very moment is the “to be or not to be” question many AI start-ups in Europe are pondering, along with those overseas that sell AI products to Europe or want to invest there – becomes far less decisive. The focus can instead shift to a standardized process that authorities and customers can easily verify.

 

The draft European AI Act sketches out another future requirement for high-risk systems that has the potential to create an impact for the better: the obligation to register all stand-alone high-risk systems (as defined in Title III of the regulation) that are made available, placed on the market, or put into service in one of the countries of the European Union in a new EU-wide database. This brings more visibility, and therefore the potential for scrutiny. Registered AI systems would be searchable by anyone interested in a centralized European Commission database.

What looks like a mere bureaucratic step could bring interesting side effects for the industry: the database becomes a place to show the features of your system and to expose it to competitors and consumers alike, potentially to the media if interest arises, and, above all, to researchers. It creates the potential to receive feedback, including on potentially harmful consequences of a system that developers did not have the means to identify themselves. Companies should not fear this risk and try to avoid it, but embrace increased visibility as a tool to optimize the AI system, also vis-à-vis investors and other stakeholders pushing for fast success. It might prove interesting, and would certainly facilitate processes, to use this register on a voluntary basis for AI systems in the lower-risk categories as well.

In the end, this might be a tiny but potentially mighty tool for AI companies to contribute to trust-building, and for smaller AI builders to demonstrate the seriousness of their processes despite lacking a podium in the public eye. On a larger level, it could be a building block for the democratic oversight that some AI systems, due to their impact on society, should allow for. In that sense, the AI Act follows the typical pattern of seemingly bureaucratic European Commission requirements becoming game-changers through the back door, at a relatively low price but with potentially high gains for both (well-intentioned) industry and citizens.

 

It is true that the terminology in the draft European AI Act can at times be confusing, and that the determination of risk categories sometimes leads to an unfair extra burden. But in our experience, it is important to stress that the intended goal of the regulation is good for the industry, especially for smaller AI companies, which the regulation supports as well. Is my investment future-proof? Am I running into shady areas with this application, and what steps do I have to build into my product-development process to avoid pitfalls? These are questions small AI-developing companies cannot postpone to a later stage, when funding is already spent or when high-profile legal teams (which they do not have, unlike big tech) must figure out what to make of a potential non-compliance. It is also worth spending more time and care on due diligence and on reassuring users: consumers will put more trust in us, and the system has a better chance of standing out positively on the market.

About the authors:

Maarten Stolk is co-founder of Deeploy and Enjins, with a focus on solutions to deploy AI responsibly in high-impact fields like fintech and healthtech. He previously worked as a data scientist, where he noticed and experienced first-hand the struggles of getting AI to production.

Julia Reinhardt is currently a non-profit Mercator senior fellow focusing on AI governance for small AI companies at AI Campus Berlin. She’s a certified privacy professional and a former German diplomat working at the intersection of global politics and technology.

Mercator Senior Fellowship at AI Campus: first month insights

September 1, 2022:

It’s been a month since I started my year as Senior Fellow of Stiftung Mercator at AI Campus Berlin. A month full of insights, new encounters, introductions, more introductions, and a first INTENSE glimpse of how discussions in tech & society & AI governance in Germany differ from those in the US.

 

Above all, deep gratefulness for being welcomed so warmly by Rasmus Rothe, Adrian Locher, Janet Wiget, Sif Björnsdottir and Arantxa Gomez at AI Campus Berlin and by Lars Zimmermann and Maximilian Maxa at GovTech Campus. 

 

They keep opening doors for me, including me in meetings, and offering me useful advice for a Berlin returnee/newbie (10 years have changed the city and work culture so much!). At the same time, my workplace offers some daily taste of Silicon Valley (even the obligatory pingpong table is there now) so I don’t feel homesick.

 

Carla Hustedt at Stiftung Mercator and her team made all this possible, and it was therefore a sheer pleasure to meet everyone, including Chairman of the Executive Board Dr Wolfgang Rohe, in person at the Stiftung’s Sommerfest in Essen last week.

A random sample of first insights:

-       Discussions with AI-developing folks about the planned European AI Act show more background knowledge than in the US, obviously, since we’re much closer to Brussels here. Some organizations I spoke with had already submitted comments on the European Commission White Paper in summer 2020, which made it easy for me to track their position and have a fruitful and detailed discussion of the pros and cons of the draft regulation. This is not to say that US-based organizations have less insight into EU plans per se, far from it: their knowledge is often more in-depth and better informed thanks to dedicated resources. But in Berlin, EU regulatory know-how is more horizontally spread, and more word of mouth has circulated in the AI community since the draft regulation was tabled in Brussels. However, this also means that misunderstandings and exaggerated interpretations are more horizontally spread as well, and that’s an interesting field of action for me.

 

-       Small and medium-sized AI companies make up the majority of my industry contacts here. That is due to the focus of my project, but also because the big AI builders are (sadly) not headquartered in Europe. Interestingly, this leads to much more overlap of positions between SMEs in the AI field and data rights activists. I had honestly hoped for that. In the coming months I want to highlight, better define, and explain that overlap in order to build coalitions we urgently need: to fight monopolization (which is not good for consumers or for our civil rights in the face of AI) and to push for diversity in the AI field. And, in the end, hopefully to influence AI regulation in a way that makes it easier to implement and lets it have an impact.

 

-       I was hence delighted to join the German AI Association as an expert member this month. A deliberate step after my super in-depth discussion with its managing director Vanessa Cann. Although this is an interest-driven association of AI industry in Germany, I feel that there is a lot of openness and curiosity to discuss governance models, AI policies that benefit society (and not just a company’s bottom line) and to direct me to interesting sources and insights. I look forward to engaging with members through Slack, internal meetings and events during the year. 

 

-       My first public appearance as Senior Fellow was at fAIstival.hamburg Woche this week. Thank you, Petra Vorsteher of AI.Hamburg, for the kind invitation! Tobias Straube interviewed me and turned my keynote on their international day into a really interesting, both critical and constructive, conversation about how AI governance principles and practice can best be aligned.

 

-       GovTech Campus Germany is like Grand Central Station in terms of who I run into every day. This is the place to be for everyone in government and the public sector interested in unlocking the potential of technology. I enjoy every encounter and admire how much has been transformed since I left public service years ago. The Campus also attracts publicly funded initiatives like Fiona Krakenbürger’s on open-source infrastructure, whose work I find super insightful. And yes, I had my picture taken in front of that famous concrete wall on my second day on site…

-       Finally, I’m gathering a list of impressive tech & society non-profits and digital rights activists I want to reach out to as my next step. European AI Fund’s Catherine Miller and my allround resource Peter Bihr have been of great help already. Lucky coincidence that Data Natives is happening right now in Berlin and Caroline Lair of The Good AI promises to reunite a bunch of interesting folks tonight – let’s go!

The EU’s New Draft AI Law Explained

by Maximilian Gahntz and Julia Reinhardt, June 11, 2021

original source: Mozilla Foundation Blog

In late April, the European Commission published its draft regulatory framework for AI. The proposal was a long time coming. In July 2019, then-candidate for the presidency of the Commission Ursula von der Leyen outlined her vision for the continent in a speech to the European Parliament. One of her priorities: artificial intelligence (AI). Within her first 100 days in office, she promised, the Commission would propose “legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence.” 641 days later and following a White Paper and a public consultation, the Commission finally delivered.

The proposal is both the most ambitious and the most comprehensive attempt at reining in the risks linked to the deployment of AI we have seen so far. With the regulation, similar to the General Data Protection Regulation (GDPR) before, the EU is once again hoping to create a “Brussels effect.” This would leverage the size of the EU internal market to turn its rules into a de facto standard for “Trustworthy AI” in other parts of the world, too. But what do the proposed new rules entail exactly? And what would they mean for those developing and deploying AI?


What’s in the new rulebook?

 

The rules proposed by the European Commission wouldn’t cover all AI systems, but only those deemed to pose a significant risk to the safety and fundamental rights of people living in the EU. This risk-based approach has several layers, with different rules for different classes of AI uses: prohibited practices, high-risk uses of AI, and certain other uses warranting heightened transparency.

For some uses of AI, the Commission proposes an outright ban. This concerns AI systems that the Commission says pose an unacceptable threat to citizens: 

  • AI systems likely to cause physical or psychological harm to a person by subliminally manipulating their or others’ behavior

  • AI systems likely to cause physical or psychological harm to a person by exploiting their or others’ age- or disability-related vulnerabilities

  • AI systems used by public authorities or on their behalf for general-purpose social scoring, leading to the detrimental treatment of a person or group

  • Real-time remote biometric identification (i.e., most notably, some forms of facial recognition) in public spaces by law enforcement authorities, although with several exceptions

 

While the impetus to draw red lines with regard to certain practices is laudable, what exactly would fall under these practices and what would not remains vague. Whether these provisions, in their current form, end up being effective or a paper tiger would most likely be decided in court. With regard to remote biometric identification, the Commission is highlighting its strict rules for the technology - but the prohibition only covers a small fraction of what the technology may be used or abused for. Meanwhile, the rest will merely be treated as high-risk, imposing requirements on the technology’s use but not restricting it entirely.

 

The comprehensive system of requirements and obligations envisioned for high-risk AI systems - as well as for those developing and deploying them - is at the heart of the proposed regulation. A list of AI uses considered high-risk by the Commission is defined in an annex to the regulation. These include most of the uses commonly cited as being particularly problematic, like AI systems used in the recruiting, employment and admissions context; in determining a person’s creditworthiness or eligibility for public services and benefits; and some applications used in the context of law enforcement, security, and the judiciary. The list can be amended by the Commission in the future - but only in certain predefined areas. Novel and currently understudied risks, particularly for vulnerable and under-represented groups, may therefore fall through the cracks in the future. Notably excluded from the scope of the regulation are uses of AI in the military, or autonomous weapons systems. There are good reasons the Commission did not want to open this can of worms at this point - still, there is a public debate to be had and action required around this issue.

 

AI systems that are considered high-risk must meet a range of different requirements, undergo conformity assessments, and be registered in a public database before they can be placed on the EU market. In most cases, however, conformity assessments will not be carried out by third parties, but rather by developers themselves (or “providers” in the lingo of the regulation). Providing for effective oversight here will be critical: after all, the GDPR has shown us how a regulatory regime built on self-assessments can be undermined if watchdogs aren’t equipped with sufficient resources to enforce it.

 

For three other types of AI use, additional transparency obligations are imposed. AI systems that directly interact with people, such as chatbots, must identify as non-human. The same is true for AI systems that make inferences about the emotional state of a person or about other potentially sensitive categories - such as sex, age, or ethnicity - based on their biometric data. Finally, and with some exceptions, synthetic media that is generated or manipulated using AI and that might be considered authentic by some - oftentimes also referred to as “deepfakes” - must be marked as such.
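As a trivial illustration of the first of these obligations - a chatbot identifying itself as non-human - the reply logic could simply prepend a disclosure on the first turn of a conversation. This is only a sketch; the wording and function name are our own.

```python
AI_DISCLOSURE = "Please note: you are chatting with an automated assistant, not a human."

def reply_with_disclosure(generated_reply: str, first_turn: bool) -> str:
    """Prepend the AI disclosure to the first reply of a conversation."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{generated_reply}"
    return generated_reply

print(reply_with_disclosure("Hello! How can I help you today?", first_turn=True))
```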

 

What does this mean for developers and users?

Developers will have to do their homework before being allowed to market high-risk AI systems in the EU, regardless of whether they are established on the continent or not. Most notably, the AI systems will have to comply with several requirements:

  • Risk management: Developers must establish a risk management system to identify and evaluate risks stemming from the intended use or from reasonably foreseeable misuse of the high-risk AI system. They must also develop suitable measures to eliminate or mitigate these risks.

  • Data and data governance: Training, validation and testing data sets must be relevant, representative, free of errors and complete - a tall order. Aspects that are particular to the specific geographical, behavioural or functional context in which the AI system is meant to be used must also be taken into account, which may require using data from the EU. 

  • Human oversight: High-risk AI systems must be designed in a way that allows people to effectively oversee their operation. Developers must also devise measures that enable those tasked with oversight to understand the capabilities and limitations of a system, to counter automation bias (i.e., humans’ tendency to defer to an automated system), to interpret and if necessary reverse or override the output, and to interrupt or stop the operation of the system.

  • Documentation and record-keeping: Documentation accompanying a high-risk AI system must contain information on the general logic of the system, design specifications, key design choices and assumptions, and the system’s architecture. It should also contain datasheets describing the training data and methodologies as well as an assessment of human oversight measures. In addition, the system must enable continuous logging during its operation. (A minimal sketch of such logging, combined with human oversight, follows this list.)

  • Accuracy, robustness and security: In order to pass the conformity assessment, high-risk AI systems must also achieve and maintain appropriate levels of accuracy, robustness and security for the intended use and the specific environment of operation. For continuously learning systems, the risk of bias caused by feedback loops must also be addressed.

  • Transparency: High-risk AI systems must be operated in a sufficiently transparent way so that they can be used appropriately and users can interpret the outputs they produce. Developers also need to provide clear instructions for users, which should further include information on performance, risks, and necessary human oversight measures.
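To give a flavor of how the human oversight, record-keeping, and logging requirements above could surface in code, here is a minimal sketch in which low-confidence model outputs are routed to a human reviewer and every decision is logged. The threshold, names, and overall structure are our own assumptions, not taken from the regulation.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("high-risk-ai-audit")

REVIEW_THRESHOLD = 0.75  # hypothetical confidence below which a human must decide

def request_human_review(case_id: str, model_score: float) -> str:
    """Placeholder for a real review queue or case-management system."""
    return "needs_manual_review"

def decide(case_id: str, model_score: float) -> str:
    """Return the decision, escalating uncertain cases to a human reviewer."""
    if model_score >= REVIEW_THRESHOLD:
        decision, decided_by = "approve", "model"
    else:
        decision, decided_by = request_human_review(case_id, model_score), "human"

    # Continuous logging: every decision is recorded with its provenance.
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "model_score": model_score,
        "decision": decision,
        "decided_by": decided_by,
    }))
    return decision

print(decide("case-0001", model_score=0.42))  # routed to a human reviewer
```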

 

At least for now, developers will have to figure out on their own how to meet these requirements. However, the Commission could adopt EU standards or common technical specifications in the future, with which AI systems would then need to comply.

 

In addition to complying with requirements, developers have further obligations: They need to establish a quality management system with the goal of ensuring compliance throughout the lifecycle of an AI system, register the system in a public database, and appoint a representative within the EU. Should developers become aware that an AI system they put on the market is no longer in compliance, they need to take corrective action or withdraw it and notify national authorities. 

 

In most cases, conformity will be self-assessed. But this is no free pass for developers. Government watchdogs can request documentation as well as access to data (and source code where necessary) and mandate developers to take action, withdraw AI systems from the market, or impose potentially hefty fines if a system is found to be in violation of the rules. 

 

Meanwhile, those actually using high-risk AI systems “on the ground” have their own obligations to comply with. Most importantly, they must use such systems in accordance with their intended purpose and the instructions provided by developers. Further, they need to monitor the operation of the system and notify developers of any unforeseen risks as well as of incidents and malfunctions. Should users deviate from the intended use of the system, substantially modify it, or market it under their own name, they themselves must comply with the obligations foreseen for developers.

 

What does this mean for the rest of the world?

Since this regulation concerns any organization providing AI systems in the EU or looking to enter the EU market (and will also influence European investors), it will, once in effect, have ripple effects felt beyond the EU’s borders. Foreign companies with a strong presence in the EU market - like many in the US and the UK, but also increasingly from China - will be particularly affected. This could be true even before negotiations for this regulation between member states and the European Parliament are completed, as companies need time to prepare and adapt their practices. If Europe manages to become a first mover in this space, it is likely that transnational companies will apply European rules in other markets as well, and governments will learn from the European experience and see which rules work - and which don’t.

 

While Europe is now taking the lead in developing the rules that will govern AI going forward, others will most likely follow suit - especially given that people’s calls for better protections from technological harms are growing louder. It’s barely a coincidence that, the day before the Commission published its proposal, the US Federal Trade Commission (FTC) published a gentle reminder that it is watching closely what AI systems companies put into use and whether these comply with existing legal requirements. Ensuring at least some degree of compatibility around AI rules between Europe and the US would be of great relief for AI builders. And there may be some willingness to talk - US National Security Advisor Jake Sullivan, for instance, tweeted on the day of the draft regulation’s publication: “The United States welcomes the EU’s new initiatives on artificial intelligence. We will work with our friends and allies to foster trustworthy AI that reflects our shared values and commitment to protecting the rights and dignity of all our citizens.”

 

Many companies developing AI already understand the necessity of setting rules for AI and have considered taking a more cautious approach until political guidance is available. Most of the big players in the US have explicitly called for regulation, many have developed their own internal ethical guidelines, and some have pulled out of certain fields of application for the time being. 

For companies, it is therefore critical that the EU’s new rules won’t take them by surprise like the GDPR did - particularly for those that cannot afford large legal and policy teams monitoring political developments in Brussels. 

 

How can developers prepare?

The EU policy process is a long and winding road and it will take a while before we know what the final version of the regulation will look like. This affords developers time to get the fundamentals in place so they’re prepared to address the details once necessary. This will also put them in a better position once other countries come up with their own rules and, importantly, develop good practices and processes around AI protecting their clients and users. 

 

In order to prepare, companies, whether in Europe or the rest of the world, should establish certain practices in their development of AI systems:

  • Organizations developing AI need to familiarize themselves with the proposed EU rules now - and not just in their legal and compliance teams, but across functions. Once they take effect, these rules will also affect the work of developers, product managers, and others. Building awareness across the organization will already make it easier to plan for implementation, operationalize regulatory requirements, and integrate new rules into the development process down the line. 

  • Organizations should set up an easily updated inventory of what AI systems are being used or developed, by whom, and for what purpose. The inventory should be checked against the Commission’s list of prohibited and high-risk uses of AI. By maintaining an inventory of all applications used or developed within an organization, it will be easier to respond to amendments to these lists. For high-risk AI systems, documentation structures should be set up to keep track of, for example, intended uses, training and testing data, and performance. (A minimal sketch of such an inventory record follows this list.)

  • Organizations should implement operational controls around the company’s high-risk systems and have a plan to roll out preventive controls over the coming years. 

  • As an additional step, it makes sense to already report findings and document processes - and continue to do so long-term - and make these reports available to regulators, buyers on the demand-side, providers on the supply-side, consumer associations and other civil society organizations. This is a practice that AI providers will benefit immensely from, as an internal process that should become second nature and also as a trust-building exercise that will convince customers and the general public that the business is mature enough to withstand an audit-like exam of its AI systems. 
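As a starting point for such an inventory, each AI system could be captured as a small structured record that can be checked against the Commission’s risk categories. The fields and the example entry below are our own illustrative assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY_ONLY = "transparency obligations only"
    MINIMAL = "minimal risk"
    UNCLASSIFIED = "not yet assessed"

@dataclass
class AISystemRecord:
    name: str
    owner: str                    # team or person responsible
    intended_purpose: str
    risk_category: RiskCategory = RiskCategory.UNCLASSIFIED
    training_data_sources: list[str] = field(default_factory=list)
    documentation_link: str = ""  # where the technical documentation lives

inventory = [
    AISystemRecord(
        name="resume-screening-assistant",  # hypothetical example system
        owner="HR analytics team",
        intended_purpose="pre-sort incoming job applications",
        risk_category=RiskCategory.HIGH_RISK,
    ),
]

high_risk = [s for s in inventory if s.risk_category is RiskCategory.HIGH_RISK]
print(f"{len(high_risk)} high-risk system(s) require full documentation.")
```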

 

What’s next?

Now that the European Commission has published its proposal, the ball is officially in the court of the EU member states and the European Parliament, which will enter into negotiations after developing their positions. How long this will take is hard to predict, especially as there are starkly diverging views on many issues. We can be certain that intense lobbying efforts are already getting underway, not only from industry but also from civil society organizations. The latter are gearing up to address the significant shortcomings of the proposal, be it the potentially ineffectual bans, the relatively lax treatment of remote biometric identification, or the risks of an oversight and enforcement system that largely relies on industry self-assessments.

 

But whatever the negotiations will leave us with, this is the world’s first comprehensive attempt at regulating AI. We should acknowledge it as such - a step in the right direction that does not go far enough but advances the global discussion on how to govern AI with concrete proposals.
