
‘Human in command’: the European blueprint for Artificial Intelligence

Submitted on 24 Oct 2018 – 17:04

Humans can and should be in command of when, how and if artificial intelligence (AI) is used in our daily lives — what tasks we transfer to AI and how transparent it is — if it is to be an ethical player. Catelijne Muller, Member of the European Economic and Social Committee (EESC) and Rapporteur on Artificial Intelligence, advocates a “human-in-command” approach to AI.

With its recent Artificial Intelligence (AI) initiative, the European Commission has charted a new course for the development of AI in Europe. The European Economic and Social Committee (EESC) fully subscribes to the Commission’s strategy. Its responsible, human-centric approach reflects the EESC’s own views on how to harness AI for the common good.

Back in 2017, when I set out to draw up the EESC’s first report on AI, I was effectively launching myself into uncharted territory. There was very little European policy work on this topic apart from a report by the European Parliament’s legal affairs committee, which focused on the legal issues surrounding AI.

The EESC chose to explore the broader societal impact of AI. We identified opportunities in healthcare, climate change, agriculture and the fight against poverty. But we also pointed to challenges: the impact on work, ethical and safety implications, the explainability of AI systems, the need for safety and quality standards, and an education system that would prepare Europeans to deal with AI.

First and foremost, we asked Europe to take the lead at a global level in setting the framework for the responsible development and use of AI by developing a code of ethics and clarifying applicable laws and regulations. Europe is a huge market; its product standards and requirements cannot be ignored by players looking to export their products. The recent experience of the GDPR shows that Europe could leverage the power stemming from the sheer size of its market to steer the development of AI globally in the direction of its own principles and values.

The Commission’s strategy acknowledges the opportunities and challenges of AI and takes on board the EESC’s suggestions. It announces that ethical guidelines will be drawn up, bringing together the efforts being made around the world to define ethical standards for AI. It also provides for an AI-on-demand platform and outlines plans to screen European laws and regulations for compatibility with AI and to boost research and investment in AI for the benefit of humanity.

The Commission has also started to explain how liability legislation applies to AI. Some have argued in favour of granting smart systems legal personality, much like corporations. The EESC has come out against this, arguing that we already have robust systems in place to deal with agents outside our control: liability laws that have existed for centuries. If a dog bites someone, we sue the owner, not the dog; if a child playing ball breaks a window, we sue the parents, not the child. For current AI systems, even the more “basic” product liability regimes apply. More importantly, our liability laws have a preventive as well as a corrective function: they deter us from causing harm. They prevent a car maker, for instance, from putting a car without brakes on the road. Imagine what could happen if we took away the threat of liability for damage caused by an AI system. What would stop a developer, in this AI race, from launching a system prematurely?

One of my main recommendations was to involve all stakeholders in the discussion. The Commission has taken this on board, both in the make-up of the High-Level Expert Group on AI (whose members include trade unionists, philosophers, ethicists, legal scholars, businesses and consumers) and in the European AI Alliance, which is open to all stakeholders to join and contribute to the discussions. To ensure that these efforts at EU level are successfully carried over to the national level, I decided, together with my two fellow Dutch members of the High-Level Expert Group, to set up the Dutch AI Alliance (ALLAI Netherlands). We aim to foster collaboration and cohesion on the future of AI, both between member states and across the EU as a whole.

To conclude, I have been advocating a “human-in-command” approach to AI. Humans can and should be in command of when, how and if AI is used in our daily lives — what tasks we transfer to AI and how transparent it is — if it is to be an ethical player. After all, it is up to us to decide if we want certain jobs to be performed, care to be given or medical decisions to be made by AI, and if we want to accept AI that may jeopardise our safety, privacy or autonomy.

This technology does not have to overwhelm us.