
Why AI regulation can foster innovation

by Maarten Stolk and Julia Reinhardt

December 19, 2022

original source: Deeploy blog


Providing guardrails to operationalize AI

Over the years, many of us have experienced the difficulties of operationalizing AI, with challenges ranging from technical integration to stakeholder acceptance and legal bottlenecks. Fortunately, over the past few years, more and more (high-impact) AI applications have gone to market, in industries like banking, insurance, healthcare, and (even) government. Models are used to detect fraud, monitor transactions, determine risks, help with administrative tasks, or assist in repetitive decision-making.

Particularly in Europe, the general public often has reservations about the use of AI in areas that have so far been dominated by human decisions. Fortunately, this attitude is shifting, slowly but surely, as understanding of the impact, opportunities, and risks of AI grows. With the realization that AI is here to stay and that public authorities recognize its importance, the view on AI is maturing. Whereas a few years ago public authorities like central banks were hesitant to approve high-risk AI applications, more recently judges have ruled in favor of using AI, considering that it is, in specific cases, already more reliable and less biased than humans.

Still, the majority of projects never see the light of day. What is usually the reason for that, besides technological immaturity or lack of funding? Is overregulation (in Europe) the scapegoat, as many say? Contrary to what we often read and see, it is, in our opinion, not killer regulation that blocks AI innovations (or would block them once more ambitious regulation takes effect), but rather the lack of regulation, and hence of standardization, that leads to a series of undeployed AI projects. Here’s why:


Why AI doesn’t go to market

Traditionally, most AI projects have followed a few iterative steps, including data exploration, modeling, and the deployment of models. Each step comes with new challenges, like data access and quality, the accuracy of predictions, or difficulties integrating with legacy systems.


This has led to too much focus on model development, while not enough attention has been paid to the (AI) system as a whole. Hence, while the model has high accuracy, it is often hard to integrate and does not fit well with the way its end users work. What commonly follows are long internal discussions, trying to convince stakeholders to use what was built, while not always taking their criticism seriously.


Furthermore, risks are often overlooked. Data Protection Officers (DPOs) are not always involved right from the beginning, and mature software practices are skipped or postponed to a later stage. This means important aspects of DevOps, SecOps, and MLOps, like version control, documentation, monitoring, traceability of decisions, feedback loops, and explainability, are not included in the first versions of an AI system, or are only considered at a stage when too much of the system has already been built.
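To make this concrete, here is a minimal sketch (in Python, with illustrative names of our own such as PredictionLog and AuditTrail, not any particular product’s API) of what traceability of decisions can look like in practice: every prediction is stored together with the model version, a hash of its inputs, and an explanation, so that individual decisions can be reconstructed later.

```python
# Minimal sketch of decision traceability: each prediction is stored with the
# model version, a hash of the inputs (rather than the raw features, to respect
# privacy), an explanation, and a timestamp. Names are illustrative only.
import hashlib
import json
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict, List


@dataclass
class PredictionLog:
    prediction_id: str
    model_version: str
    input_hash: str                 # hash of the input features
    output: Any
    explanation: Dict[str, float]   # e.g. feature attributions
    timestamp: str


@dataclass
class AuditTrail:
    records: List[PredictionLog] = field(default_factory=list)

    def log_prediction(self, model_version: str, features: Dict[str, Any],
                       output: Any, explanation: Dict[str, float]) -> PredictionLog:
        record = PredictionLog(
            prediction_id=str(uuid.uuid4()),
            model_version=model_version,
            input_hash=hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()
            ).hexdigest(),
            output=output,
            explanation=explanation,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        self.records.append(record)
        return record


# Example (illustrative values):
# trail = AuditTrail()
# trail.log_prediction("fraud-model-1.3.0", {"amount": 950.0}, "review",
#                      {"amount": 0.62, "country_mismatch": 0.21})
```

Retrofitting this kind of logging after an AI system is in production is much harder than designing it in from the start, which is exactly the point made above.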


Embrace the design phase to build successful AI applications

We tend to underestimate the value of the design phase in AI projects: the design of the infrastructure, but also the UX/UI design of the interface toward end users.

By applying design thinking, we push ourselves to think about the end product we would like to realize. This way, we raise questions much earlier about, for instance, the comprehensibility of model outcomes, the traceability of decisions, alerts, monitoring, and processes. This approach also forces us to think about integration right away, at the level of the system rather than only the model.

It also means the length of development projects increases dramatically. The question is: is that actually a problem? In our experience it isn’t, as the probability of success also increases dramatically. It is much more likely that end users, as well as compliance and risk officers, accept a product when developers can show they have thought carefully about both integration and risks, and have mitigated the latter. Or, to put it differently: simply don’t start building if you can’t deal with the risks in the design phase.


The role of regulators

Process standardization can help us out, pushing us to properly design an AI system before it is built. The European AI Act, currently in its final negotiation stage at the EU level, covers a full spectrum of criteria that are all important, yet easy to overlook or underestimate, especially at an early product stage. They are, in our experience, useful guardrails that help AI developers incorporate useful functions during the design phase of an AI system.

For example, the requirement for effective human oversight forces us to think about ways to give humans the appropriate controls to work with AI for high-risk use cases.
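As an illustration of what such controls could look like, the sketch below (Python, with purely illustrative names and thresholds of our own) routes high-risk predictions to a human review queue instead of acting on them automatically, and records that the human verdict takes precedence. It is one possible pattern, not the only way to satisfy the requirement.

```python
# Minimal sketch of a human-oversight control: predictions above a risk
# threshold are not executed automatically but queued for human review,
# and the reviewer's decision always overrides the model's suggestion.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Decision:
    case_id: str
    model_score: float            # e.g. estimated probability of fraud
    auto_decision: str            # what the model would do on its own
    final_decision: Optional[str] = None
    decided_by: str = "model"


def decide(case_id: str, score: float, review_threshold: float,
           review_queue: List[Decision]) -> Decision:
    auto = "block" if score >= 0.5 else "approve"
    decision = Decision(case_id=case_id, model_score=score, auto_decision=auto)
    if score >= review_threshold:
        # High-impact outcome: defer to a human instead of acting automatically.
        review_queue.append(decision)
    else:
        decision.final_decision = auto
    return decision


def human_override(decision: Decision, reviewer: str, verdict: str) -> Decision:
    # The human reviewer's verdict is recorded and takes precedence.
    decision.final_decision = verdict
    decision.decided_by = reviewer
    return decision
```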


Other important requirements covered by the AI Act include (a rough sketch of how a team might document these in practice follows the list):

Risk management systems

Data and AI governance

Technical documentation

Record-keeping and reproducibility, while adhering to privacy standards

Transparency and provision of information to understand the system

Accuracy, robustness and cybersecurity
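As a rough illustration of the documentation and record-keeping themes above, the sketch below shows a minimal, machine-readable documentation record a team might keep alongside a model. All field names and values are our own invention for illustration; the AI Act does not prescribe this format.

```python
# Minimal sketch of technical documentation kept alongside a model, loosely
# following the themes above (intended purpose, data governance, accuracy,
# robustness, human oversight). Purely illustrative, not a prescribed schema.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ModelDocumentation:
    system_name: str
    version: str
    intended_purpose: str
    risk_category: str                  # e.g. "high-risk" under the Act's taxonomy
    training_data_sources: List[str]
    data_governance_notes: str          # provenance, representativeness, bias checks
    accuracy_metrics: Dict[str, float]
    robustness_tests: List[str]
    human_oversight_measures: List[str]
    contact: str


# Example with made-up values:
doc = ModelDocumentation(
    system_name="transaction-monitoring",
    version="1.3.0",
    intended_purpose="Flag potentially fraudulent transactions for human review",
    risk_category="high-risk",
    training_data_sources=["internal transactions 2019-2022 (pseudonymised)"],
    data_governance_notes="Sampling reviewed for class and regional balance.",
    accuracy_metrics={"auc": 0.91, "recall_at_1pct_fpr": 0.84},
    robustness_tests=["monthly drift report", "stress test on synthetic outliers"],
    human_oversight_measures=["review queue above risk threshold", "manual override"],
    contact="compliance@example.com",
)
```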

 

To be honest, these future requirements should be part of any development process for high-impact AI systems. Making them mandatory now, for high-risk applications of AI only, encourages developers to do their due diligence, as they would for any pharmaceutical product. And we believe that for a product in the high-risk category, the processes are, and should be, similar.

 

The European AI Act actually encourages companies to apply these criteria voluntarily to non-high-risk applications of AI as well. By doing so, the question of whether a system is high-risk or not (at this very moment the “to be or not to be” question that many AI start-ups in Europe are pondering, as are companies overseas that sell AI products to Europe or want to invest there) becomes far less critical. Focus can instead be put on a standardized process that authorities and customers can easily verify.

 

The draft European AI Act sketches out another future requirement for high-risk systems that has the potential to create an impact for the better: the obligation to register all stand-alone high-risk systems (as defined in Title III of the regulation) that are made available, placed on the market, or put into service in one of the countries of the European Union in a new EU-wide database. This brings more visibility, and therefore the potential for scrutiny. Registered AI systems would be searchable by anyone interested in a centralized European Commission database.

What looks like a mere bureaucratic step could have interesting side effects for the industry: the database becomes a place to show the features of your system and to expose it to competitors and consumers alike, potentially to the media if interest arises, and, above all, to researchers. It creates the potential to receive feedback, including on potentially harmful consequences of a system that the developers did not have the means to identify themselves. Companies should not fear this exposure and try to avoid it, but embrace increased visibility as a tool to improve the AI system, also vis-à-vis investors and other stakeholders pushing for fast success. It might prove interesting, and would certainly facilitate processes, to use this register on a voluntary basis for AI systems in the lower-risk categories as well.

In the end, this might be a tiny but potentially mighty tool for AI companies to contribute to trust-building, and for smaller AI builders to demonstrate the seriousness of their processes despite lacking a podium in front of the general public. On a larger level, it could be a building block for the democratic oversight that some AI systems, due to their impact on society, should allow for. In that sense, the AI Act follows the typical pattern of seemingly bureaucratic European Commission requirements becoming game-changers through the back door, at a relatively low price but with potentially high gains for both well-intentioned industry and citizens.

 

It is true that terminology in the draft European AI Act can at times be confusing, and that the determination of risk categories sometimes leads to an unfair extra burden. But in our experience, it is important to stress that the intended goal of the regulation is good for the industry, especially for smaller AI companies, which the regulation supports as well. Is my investment future-proof? Am I running into shady areas with this application, and what steps do I have to build into my product-development process to avoid pitfalls? These are questions small AI companies cannot postpone to a later stage, when funding is already spent or when high-profile legal teams (which they, contrary to big tech, do not have) must figure out what to make of a potential non-compliance. It also shows that it is worth spending more time and care on due diligence and on reassuring users: consumers will put more trust in us, and the system has a better chance of standing out positively on the market.


About the authors:


Maarten Stolk is co-founder of Deeploy and Enjins, with a focus on solutions to deploy AI responsibly in high-impact domains like fintech and healthtech. He previously worked as a data scientist, where he noticed and experienced first-hand the struggles of getting AI into production.


Julia Reinhardt is currently a non-profit Mercator senior fellow focusing on AI governance for small AI companies at AI Campus Berlin. She’s a certified privacy professional and a former German diplomat working at the intersection of global politics and technology.
