AEPL report "Governance of AI".

Published on 15/06/2020

Towards better European governance of Artificial Intelligence.

The European Union wants:

  • AI that puts people and citizens first;
  • Reliable technologies that inspire trust;
  • These technologies placed at the service of a democratic society, a dynamic and sustainable economy and the ecological transition.

AEPL fully supports these objectives. The following suggestions are intended to help achieve them.

We are basing these suggestions on three sources that are fundamental to the way the Union works.

  • First and foremost, the Treaty on the Functioning of the European Union and, in particular, Articles 8 (elimination of inequalities), 9 (horizontal social clause), 10 (fight against discrimination), 11 (environmental protection), 12 (consumer protection), 15(1) and 15(3) (principle of open governance) and 16 (protection of personal data).
  • Secondly, the Charter of Fundamental Rights of the European Union, in particular Articles 8, 21, 31 (fair and just working conditions), 37, 38 and 42.
  • Finally, the European Pillar of Social Rights, in particular Principle 10.

In addition, AEPL calls on the Commission to implement the EP's suggestions on AI, in particular the recommendations set out in the European Parliament document in Annex 1.

The EU's action is aimed at "accelerating the deployment of AI". This deployment obviously requires user confidence. It follows, mutatis mutandis, the logic of the Machinery Directive (1989), drafted in the context of the creation of the internal market in order to make the free movement of goods more reliable.

Thus, the "trustworthy" criterion motivates a regulatory initiative based on reliability requirements to control risks in order to protect consumers and data. The regulatory objective seems to target the major risks of "high-risk artificial intelligence systems", which call for "clear rules".

Controlling risks in such a way as to generate public confidence is not unrelated to the current pandemic context. The system under study is in fact part of an essential function of the State: the protection of citizens and of the environment, covering decisions on preventive treatment, marketing and use. Upstream, design; downstream, as with the GDPR, protection, including that of fundamental public freedoms.

The pandemic is a tragic reminder that this protection and care can be equated with the sovereign functions of the State, to the point of justifying substantial restrictions on democratic freedoms. This is not the place to settle the debate on the relevance of this justification, but it is the place to point out that protection and care belong to the vertical plane of the res publica, the general interest and the values that have no price, and that they must therefore be imposed on the horizontal plane of particular interests [i].

By imposing limits and rules on horizontal trade, power rests with the rule of law and not with the law of the strongest.

The democratic quality of decision-making and rule-making processes, the transparency of these processes, the independence of public authorities from vested interests, the consistency of actions with words, announcements and commitments - these are all prerequisites for building public trust.

It is in this spirit that the AEPL is sending you the following warnings and suggestions on the eve of the drafting of the provisions sought by the Union.

   1. Political courage: you can't please everyone

It will not be easy for the European authorities to consolidate the rule of law in the face of the de facto economic powers desired by the markets and the AI oligopolies.

Indeed, economic forces on all sides are pressing for as swift a return to "business as usual" as possible, and even for the bracketing of public protections, especially environmental ones, in the name of growth imperatives. There are calls for the 2030 climate targets to be postponed, the Commission's Green Deal idea is under attack, a number of players in the digital sector are vilifying the GDPR, industry is arguing for national rules to be relaxed, and so on [ii].

In other words, a strong political commitment will be required to keep the system anchored on the vertical axis of the general interest.

  2. Bringing the system into line with EU regulatory frameworks.

The system's position on the vertical axis of protection and care calls for the adoption of 'hard' standards, i.e. directive(s), as opposed to 'soft' standards (soft law). Indeed, the effect of such 'soft' standards is to 'bring down' the system onto the horizontal axis of individual interests, at best morally tempered by good practices voluntarily implemented under the banner of corporate social responsibility. The point here is not to question the honourability and importance of such practices; it is to highlight their shortcomings in terms of generalisation to the entire sector, enforceability and sustainability. Compliance with good practice allows for partial and sporadic application, which the reliability and trust expected by the Commission cannot tolerate.

Need we remind you of the deleterious effects of soft law? What about social dialogue, paralysed by "voluntary agreements" when the social partners had the power to conclude genuine collective agreements? Or the disappointing results of the open method of coordination, which dashed the hopes raised by the Luxembourg 'process' in terms of employment?

By contrast, budgetary obligations, backed by penalties, remain as strong as ever.

The safety of AI products requires rules that are all the more robust given the tendency for safety rules to be relaxed, whether in the standards governing operating licences [iii] or in compliance with prevention requirements during production [iv]. This complacency seems at odds with the growing environmental and safety demands of the general public [v]. It illustrates the effectiveness of lobbies and the lack of foresight in a number of companies.

  3. The industry's strategy: mastering time.

The industry wants to retain control over the nature of innovations and the pace at which they are brought to market in a world that is increasingly questioning the purpose of innovations and their impact on the balance of ecosystems.

  4. The need to defend the precautionary principle.

In the strategic context described above, at the initiative of the European Risk Forum [vi], industry has developed a pseudo "innovation principle" (which it controls) and is trying to convince public authorities to adopt it in competition with the precautionary principle, the only one that actually exists in law. This pseudo-principle of innovation is used to justify all sorts of delays, twists and more or less long-term exemptions from the legal application of the precautionary principle. The aim is obviously to establish, at least de facto and at best de jure, a principle of innovation that would hold the precautionary principle and regulations in contempt. AEPL does not accept such manoeuvres.

  5. Distinguishing between science and technoscience [vii].

In the same strategic vein, some sectors regularly confuse basic scientific research with technological innovation, and science with technoscience. In the name of this confusion, technological innovation should supposedly enjoy the same guarantees of freedom (academic freedom) as fundamental research alone. This overlooks the fact that fundamental research's raison d'être, the free advancement of knowledge, and its educational mission of public utility place it on the vertical plane of the general interest. Technological innovation, by contrast, operates on the horizontal plane of commercial relations and must comply with the rules of the general interest.

The confusion is obviously fuelled by under-investment in university research (in Belgium since the late 1970s) and by industry taking over the financial reins.

  6. Learning from experience in other areas of technological innovation.

In particular, the agrochemicals sector reveals a very complete typology of time-gaining, control and diversion tactics: delaying the implementation of regulations [viii], casting doubt on inconvenient scientific studies, discrediting the authors of those studies, circumventing the politicians called upon to take decisions, financing servile research [ix], promoting the company's own results, attracting and controlling scientists from public institutions and universities, and so on.

Even if, in the end, the result is not what was hoped for in a particular authorisation procedure, the time gained enables other products to be developed, which will in turn have ample time to take root before possibly being discarded in the more or less distant future.

The so-called "principle" of innovation thus takes on its full meaning: industry retains control over time. It is interested not in the long term but in the succession of short terms on the horizontal plane of markets, of innovations that generate profits before possibly being rejected on the vertical plane of the general interest. The principle of innovation simply has to precede that of precaution each time, which is logical if the two principles are placed on an equal footing. It is to this end that industry wants innovation to be recognised as a principle. At that point, the rule of law is replaced by the law of the strongest, at least for the time needed to make a profit. AEPL cannot and does not want to be part of such a scenario.

In this respect, we would like to highlight the perverse effects of two EU procedures that should not be repeated in the field of AI.

Primo, in agrochemicals, the "confirmatory data procedure", which authorises marketing subject to the manufacturer's obligation to complete the product's safety dossier at a later date.

Secundo, the repeated trilogue procedures (Council, Commission, Parliament) on the same subject in the event of disagreement in the Council. These successive procedures only serve to gain time for industry and to encourage the Commission, at each round of negotiations, to water down its proposed levels of protection in the hope of reaching an agreement in the Council [x]. Parliament is the loser in this process.

Finally, the certification of products involving a certain level of risk cannot be entrusted to the manufacturer itself: in such cases, certification of the product's conformity with the EU's essential safety requirements by a third party is required. The level of risk, the qualifications of third-party certifiers and their independence must be the subject of a broad democratic debate.

"For low-risk AI applications, the Commission is considering a non-mandatory label scheme if they apply higher standards." This undoubtedly implies self-certification by producers. This principle of self-certification merits serious critical examination in the light of its application over the past thirty years.

  7. The requirements of independence and transparency.

The citizens of the Union expect the public authorities to be determined in their fight against lobbies, conflicts of interest and collusion [xi]. The European authorities must ensure that decision-making processes are transparent and public. The invocation of business secrecy or intellectual property rights is a cheap way of ensuring opacity, particularly in the case of expert studies [xii]. Public procedures verified by Parliament must guarantee the independence of the scientists responsible for assessments [xiii].

For its part, on 7 March 2019, the General Court of the European Union overturned a decision by the European Food Safety Authority (EFSA). The court ruled that confidential studies on the toxicity of glyphosate must be made public, considering that "the public interest in access to information" in environmental matters outweighs commercial interests [xiv]. AEPL considers that the same rule applies in the field of AI.

Specifying its intentions with regard to AI, the Commission also states: "Artificial intelligence systems should be transparent and traceable, while guaranteeing human control. The authorities should be able to test and certify the data used by the algorithms. Unbiased data is needed to train high-risk systems to work properly and to ensure respect for fundamental rights, including non-discrimination."

The Commission specifies that its system does not concern military applications. However, AEPL points out that the technology oligopolies make no such distinction, so much so that it was Google employees who recently highlighted the porosity between civilian and military applications. This type of confusion must be taken into account when talking about transparency.

In addition, oligopolies commit astronomical and sometimes unsavoury resources [xv] to enforce the law of the strongest and to exert vertical pressure on public authorities, which have only the force of law with which to protect citizens.

Over and above the issue of security and data protection, the system should also guarantee citizens' right to transparency in the operation of algorithms. A digital right to know should make it possible to critically "X-ray" algorithms, or to audit them, as Dominique Cardon puts it.

  8. Breaking down silos and broadening the debate to include other stakeholders.

Notwithstanding the difficulties mentioned in the previous paragraphs, the balance of power has shifted, driven in particular by the rise of civil society, which is demanding accountability, and by the worrying decline in confidence in traditional institutions, including private companies [xvi].

The need to revise our lifestyles towards greater sustainability, discussed since the Rio Summit in 1992, has been at the heart of the concerns of a small but growing number of economic players, including many industrialists, who have decided to integrate the principles of sustainable development drawn up by the UN into their corporate strategy [xvii].

This action has taken a number of forms, as these principles cannot be directly transposed to players whose aim is to make a profit. One approach has been to investigate in depth the notion of responsible innovation, particularly following the EC's decision to incorporate so-called "Responsible Research and Innovation" into the Horizon 2020 programme. Based on this experience, the attached document, "Comments on the documents published on 19 February 2020", shows just how rich the debate around these major issues can be, and how different an understanding of complex realities can be when experienced players in the field are called upon.


AEPL sees this debate on system safety as part of a wider reflection on the purpose of technological innovation in terms of its contribution to the well-being and progress of humanity. This progress must be indexed to a growth in being, not in having, in harmony with terrestrial and social interdependencies and therefore focused on the long term. AEPL therefore believes that this regulatory initiative should be part of a democratic process to determine what kinds of innovation are desirable.

The precautionary principle, combined with the principle of proportionality, is certainly one of the keys to correctly approaching innovation that we hope will be viable [xviii].

To this end, we call on the European authorities to work with a wide range of players on the ground, to build bridges between the different components of society, with businesses, governments, civil society, universities and investors who practise sustainable finance.

AEPL therefore urges the European authorities to mobilise AI tools to implement the Green Deal and to repair social fractures. There are enormous needs in terms of developing people's skills, the circulation of knowledge, culture, care in all its forms, the development of public services and access to those services for all. If the European Union wants to increase its technological independence, it can do so by implementing a programme of joint projects and developing the ad hoc tools needed to achieve this. In other words, by designing the tools for shared intelligence.

It is on the basis of such a democratically defined project that the equally democratic questions of what data to capture, by whom, for what purposes, subject to what processing, for what contribution to the debate on societal choices, etc., arise. Such data and metadata would then be treated as common property.

June 2020

  • i. Alain Supiot, Governance by Numbers, 2015.
  • ii. For example, the lobbying of French employers: Raphaëlle Besse Desmoulières, Jean-Michel Bezat, Cédric Pietralunga and Nabil Wakim, "Climate: employers take action to influence standards", Le Monde, 22 April 2020.
  • iii. For example, in France, Service Planète, Le Monde, 9 June 2018, or Stéphane Mandard, "Lubrizol: weakened controls at high-risk sites", Id., 5 October 2019.
  • iv. For example, Stéphane Mandard, "Lubrizol: a damning report for subcontractors", Le Monde, 23 October 2019.
  • v. On air pollution, for example, in 2018 the European Court of Auditors found that "the health of European citizens remains insufficiently protected" and recommended that the Commission adopt stricter limits for air pollution (Le Monde, 12 September 2018).
  • vi. In this case, the European Risk Forum (ERF), a lobbying platform for chemical, tobacco and fossil fuel companies.
  • vii. For example, Jean-Marc Lévy-Leblond, "There is no guarantee that a civilisation will maintain scientific activity", interview by David Larousserie, Le Monde, 18 March 2020.
  • viii. A delay such as that in the process supposed to regulate endocrine disruptors, which led to a condemnation by the European Court, discredits the Commission.
  • ix. See, for example, Stéphane Foucart, "Troubled links between public research and agrochemicals", Le Monde, 18 June 2018.
  • x. To see for yourself, take a look at the 25 discussions between the Commission and the Member States between 2013 and 2019 on bee-killing neonicotinoids. See, for example, Le Monde, 22 December 2018.
  • xi. See, for example, the work of Corporate Europe Observatory, which denounces the collusion between lobbies and European decision-makers.
  • xii. In particular the paper by David Demortain, a sociologist at INRA, part of the Interdisciplinary Laboratory for Sciences, Innovations and Societies, Le Monde, 7 February 2018.
  • xiii. Need we recall the case of the German BfR institute, which copied, often word for word, the registration application filed by industry in order to evaluate glyphosate (cf. Le Monde, 16 January 2019)?
  • xiv. Stéphane Horel, "Glyphosate: a victory for transparency", Le Monde, 10-11 March 2019.
  • xv. On the means used to defend glyphosate, see Stéphane Foucart and Stéphane Horel, "Monsanto has registered almost 1,500 people in Europe", Le Monde, 8-9 September 2019.
