New European Commission (EC) President Ursula von der Leyen looks set to be instrumental in pushing through a significantly changed regulatory environment for the development of artificial intelligence in Europe, a stark departure from the EC’s approach under her predecessor Jean-Claude Juncker, who chose not to legislate in the area.
Von der Leyen, previously Germany’s defense minister, wrote in her agenda for Europe: “In my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of artificial intelligence,” Politico reported.
These comments came after German Chancellor Angela Merkel said: “It will be the job of the next Commission to deliver something so that we have regulation similar to the General Data Protection Regulation that makes it clear that artificial intelligence serves humanity.”
Critical Decisions to Come Over Nature of AI Legislation
These comments from senior European politicians follow a report published in April by the European Commission’s High-Level Expert Group on AI (AI-HLEG), titled Ethics Guidelines for Trustworthy AI. The report gives early indications of how the EC might legislate on AI from an ethical perspective. The EC should tread carefully if it turns the guidelines recommended in the report into regulation: there is a careful balance to strike between encouraging an environment in which Europe can thrive as an engine room for innovation and ensuring that technology develops in a direction compatible with the values of the EU and its member states.
The European business community wants to give itself the best possible chance to foster innovation in the AI space. Europe, having failed in the past two decades to produce many genuine challengers to Silicon Valley and its internet giants, must be careful not to create regulations that prevent AI-focused European companies from coming to the fore.
The critical concern here is that regulation would damage the competitiveness of companies developing AI technology in Europe, and that potential end users would miss out on some of the benefits of the technology. If the European Commission turns the recommendations made by the AI-HLEG into legislation as they stand, European companies could suffer from an overly punitive regulatory environment.
For instance, the guidelines on explainability and transparency in chapter 2 urge companies to introduce AI solutions whose decisions can be fully explained and made totally transparent to the user. These requirements look to us unworkable to implement, and IDC’s view is that the EC should not go as far as turning them into regulation as currently envisaged.
The reality of AI technology today is that many types of AI software systems remain “black boxes”: it can be very challenging to develop a detailed understanding of what is influencing an AI system’s decision and which aspects of a piece of information it is attending to when making that decision. Promising techniques such as topological data analysis could enable data scientists to better interpret AI-based decision making, but the area will require considerable further investment. The European Union would do well to fund further research into technological solutions that make AI more explainable.
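To give a sense of what post-hoc explainability tooling looks like in practice today, the sketch below uses permutation importance, a model-agnostic technique available in scikit-learn, to estimate which inputs a black-box model relies on. This is purely illustrative and not part of the AI-HLEG guidelines; the synthetic dataset and choice of model are assumptions made for the example.

```python
# Illustration: even for a "black box" model, model-agnostic tools can
# estimate which inputs drive its decisions. Permutation importance
# shuffles one feature at a time and measures how much accuracy drops.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: 5 features, of which only the first 2 are informative
# (shuffle=False keeps the informative columns first).
X, y = make_classification(
    n_samples=500, n_features=5, n_informative=2, n_redundant=0,
    shuffle=False, random_state=0,
)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# A large accuracy drop when a feature is shuffled means the model
# relies on that feature; noise features should score near zero.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Techniques like this describe which inputs matter, but they stop well short of the full causal explanation of an individual decision that the guidelines appear to envisage, which is why we see the explainability requirements as unworkable in their current form.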
Politico reported that sources familiar with the EC’s digital department suggest it will recommend a regulatory framework that sets transparency obligations on automated decision-making systems. The Commission is also looking into requiring that AI systems be assessed to ensure they do not perpetuate discrimination or violate fundamental rights such as privacy. Legislation of this type would push companies toward implementing AI systems as tools that support staff rather than as systems that fully automate processes, particularly where AI systems would be making decisions about humans.
Legislation to this effect could conflict with a large number of AI systems that have already been deployed. For instance, chatbots that let companies automate their expensive customer service divisions are already prevalent in some consumer-facing applications, and they could constitute a form of automated decision-making AI that falls foul of the EU’s guidelines if those guidelines are not applied sensibly.
The EC is unlikely to introduce legislation that bans AI systems that have already been deployed. But this doesn’t mean its legislation couldn’t threaten the creation and adoption of future technology that falls outside any hard EC definition of palatable AI systems, even where that technology poses no real ethical dilemma and makes clear business sense, as is the case with chatbots.
GDPR-Style AI Regulation May Prove Counterproductive
Instead of taking a general approach that covers all AI-related systems in a blanket way, similar to GDPR, the EC should treat AI as a domain-specific technology and regulate accordingly. Each AI application presents a different level of ethical risk depending on the industry it addresses and the nature of the application itself. For instance, an image recognition algorithm used by consumers to identify types of plants is inherently less problematic than one used by medical professionals to diagnose patients.
An AI equivalent to the GDPR legislation would mark a fundamental shift in the technology landscape in Europe. Businesses in the AI space need to pay attention to how this legislation is developing and eventually implemented. The EU’s eventual AI legislation is likely to be highly influential in the region but also globally, as regulators in other markets look for leadership on the issue.
IDC has assembled a special task force to investigate the landscape and business environment of ethics in relation to AI in Europe. The group will focus on developing a framework to help organizations avoid running into issues as a result of poor ethical practice, either as producers or as consumers of AI-related technology. If your organization is interested in the findings of this research, please contact Jack Vernon (Senior Research Analyst), Neil Ward-Dutton (Vice President, AI and DX European Research Practices), or Philip Carnelley (AVP, European Software Group).