An EU strategy for AI: turning constraints into competitive advantages!
Published on 04/03/2026
Hedi Blili-Gouyou and Guy T'hooft
I. INTRODUCTION - THE EUROPEAN PARADOX
The dominant narrative on Europe's digital strategy has crystallised around an alarmist observation: Europe is irretrievably losing the "race to artificial intelligence". This rhetoric of predicted defeat is now shaping political debates and guiding budgetary decisions, fuelling a form of strategic fatalism. Faced with the American and Chinese ecosystems, the European Union would appear to be condemned to a subordinate role: that of a fussy regulator, incapable of generating its own technological champions, entangled in its own regulatory contradictions.
This note sets out to show that this diagnosis stems from a fundamental methodological error. It mechanically transposes to Europe success criteria forged elsewhere, without questioning their relevance or sustainability. The absence of European counterparts to OpenAI or Tencent is only a weakness if we implicitly accept that the oligopolistic concentration model represents the ultimate horizon for technological innovation.
Our central thesis turns this perspective on its head: the structural characteristics of the European ecosystem - institutional fragmentation, high standards, priority given to fundamental rights - are not temporary handicaps to be overcome, but the foundations of an alternative economic model that is potentially more resilient and more profitable in the long term. Ethics are not an external brake on innovation, but an infrastructure of trust that can become a sustainable competitive advantage [1].
This hypothesis is based on a systemic analysis of four presumed 'weaknesses' in European strategy: the absence of industrial champions, the complexity of the AI Act, the ambiguity of the 'third way', and critical technological dependencies. For each, we will show how a fresh strategic reading can identify transformative levers for action.
The stakes go far beyond economic competition. What is at issue is Europe's ability to embody a form of technological power that does not renounce the civilisational achievements of liberal constitutionalism [2]. No other geopolitical area bears this responsibility - or has the historical legitimacy to do so. The question is therefore not to choose between innovation and fundamental rights, but to prove empirically that one cannot exist in the long term without the other.
II. THE ABSENCE OF INDUSTRIAL CHAMPIONS: RETHINKING THE POWER MODEL
A. The classic complaint: a techno-nationalist interpretation of competitiveness
The diagnosis of the failure of the European strategy rests on a triptych of seemingly implacable arguments. Firstly, the absence of technological giants comparable to OpenAI, Google DeepMind or Anthropic is said to reveal a structural inability to mobilise the resources needed for disruptive scientific breakthroughs. Secondly, the fragmentation of the market into twenty-seven national ecosystems would prevent the emergence of the economies of scale essential for training competitive foundation models. Thirdly, the chronic under-capitalisation of European start-ups - which raise on average four times less than their American counterparts at the Series B stage - would condemn European innovation to a form of congenital dwarfism.
This approach, however widespread it may be in decision-making circles, suffers from a fatal flaw: it naturalises a model of technological power - oligopolistic concentration - without questioning its hidden costs or sustainability. As the report by the European Court of Auditors (2024) points out [3], "performance assessment cannot be limited to quantitative indicators of market capitalisation, at the risk of missing out on the qualitative transformations of the innovation ecosystem".
B. The strategic counter-reading: monopoly vulnerabilities and distributed resilience
- The systemic fragility of concentration
The current architecture of the global digital infrastructure is based on a dangerous paradox: almost total dependence on a small number of private players for functions of vital importance. The outage of Amazon Web Services on 7 December 2021, which lasted less than six hours, caused global economic losses estimated at €3.5 billion and paralysed essential services - from public health to air transport. This vulnerability is not cyclical but structural: it is a direct result of the concentration model that Europe is supposed to reproduce.
Conversely, a distributed ecosystem - precisely what European fragmentation spontaneously produces - generates a form of systemic resilience. The multiplication of innovation points, far from being a waste of resources, functions as a strategic redundancy. In a geopolitical context marked by increasing risks of disruption (cyber attacks, trade tensions, energy crises), this decentralised architecture represents an undervalued sovereignty asset.
- Vertical excellence as an alternative strategy
The case of ASML, a Dutch company with a virtual world monopoly on extreme ultraviolet (EUV) lithography, empirically invalidates the 'generalist champion' thesis. The fruit of twenty-five years of patient investment - during which the company made no profit - ASML illustrates a radically different innovation trajectory from the Silicon Valley model. Its market power comes not from network effects or aggressive acquisition strategies, but from deep technological mastery of an ultra-specialised segment. And this approach plays precisely to Europe's comparative advantages: scientific excellence, cooperation between industry and research, and the capacity to invest for the very long term.
The European AI ecosystem already has this sectoral morphology: Mistral AI (sovereignty and open models), DeepL (multilingual language processing), Siemens and SAP (industrial and enterprise AI). Rather than lamenting the absence of a European Google, the strategy should aim to consolidate these vertical leadership positions, while accepting that they do not generate the same media visibility as generalist unicorns.
- "Patient capital" as a competitive weapon
The model of the German Mittelstand - family businesses with a multi-generational time horizon, investing massively in R&D without pressure for quarterly returns - offers a precedent for thinking about an AI economy that escapes the logic of the rapid "exit". The European Commission, in its Action Plan for an AI Continent (2024-2025) [4], implicitly recognises this specificity by calling for "funding mechanisms adapted to the long cycles of technological maturation". However, this call remains largely programmatic.
C. Operational recommendations
Proposal 1: Create a European "Long-Term AI" investment fund, endowed with €15 billion over fifteen years (i.e. €1 billion per year), with an explicit clause prohibiting any requirement for a return on investment before ten years.
This amount represents an annual investment equivalent to that currently devoted by the EU via Horizon Europe and the Digital Europe programme (approximately €1 billion per year according to the European Commission, 2024 [5]). However, unlike existing programmes, which finance 3-5 year projects, this fund would focus exclusively on 10-15 year horizons, enabling breakthroughs in science-intensive segments where Europe can aim for world excellence: explainable AI, neuromorphic computing, optimisation under constraints. The amount is also consistent with the Coordinated Plan's objective of mobilising €20 billion a year (public + private) by 2030 [6]; the Long-Term AI fund would contribute 5% of this objective, focused on very long-term fundamental research.
Proposal 2: Refocus the criteria for valuing European innovation. Replace unicorn rankings - which essentially measure the ability to raise funds - with sectoral technological leadership indicators: key patents, technical standards adopted, market share in high added-value segments.
III. THE AI ACT: FROM BUREAUCRACY TO A REGULATORY WEAPON
A. The classic complaint: regulatory paralysis
The four hundred pages of the AI Act crystallise all the criticisms levelled at the "European model": Kafkaesque bureaucracy, ignorance of technical realities, unbearable extra costs for start-ups. These criticisms, amplified by the American industrial lobbies and complacently relayed by certain European analysts, build up the image of punitive regulation, designed to compensate for Europe's inability to innovate by fussy control of the innovation of others.
This representation deliberately ignores two major historical precedents. On the one hand, the same arguments were mobilised against the GDPR in 2016-2018: it was supposed to "kill the European digital economy", cause "the exodus of start-ups", and enshrine "the definitive domination of GAFAM". Seven years on, the GDPR has become a de facto global standard, generating a European privacy tech industry valued at €2.5 billion and forcing the American giants to make structural changes to their business models. On the other hand, the history of the European economy shows that strong standards have historically been a driver of competitiveness - from the metric system to ISO standards to car safety standards.
B. The strategic counter-reading: the "Brussels Effect" as a power strategy
- The GDPR effect: regulation as market infrastructure
The GDPR illustrates a mechanism of normative power that the political scientist Anu Bradford has theorised as the "Brussels Effect": the European Union's ability to unilaterally export its regulatory standards, transforming its internal norms into quasi-global constraints. This phenomenon is based neither on military coercion nor on economic domination, but on three structural factors: the size of the European market (450 million consumers), the non-divisibility effect (beyond a certain threshold of complexity, multinationals cannot maintain differentiated standards by jurisdiction), and strategic anticipation by private players, who prefer to adopt the most demanding standard in advance.
The AI Act has all the characteristics needed to reproduce this effect. As the Internet Policy Review (2025) notes [7], the first empirical signals confirm this dynamic: several American states (California, New York) are studying legislation directly inspired by the AI Act, while governments in South-East Asia are seeking the Commission's technical expertise to develop their own regulatory frameworks.
- Compliance as a barrier to entry and a competitive moat
The standard economic analysis of regulations presents them as dead costs, reducing margins and holding back innovation. This view systematically overlooks their function as a barrier to entry. A demanding regulatory framework penalises opportunistic players - whose business model is based on outsourcing risks - more than established players capable of internalising the costs of compliance.
A study by the IAPP (International Association of Privacy Professionals, 2024) [8] reveals that 67% of organisations that have integrated privacy governance into their AI strategy say they are confident about their AI Act compliance - a sign of an emerging competitive advantage for companies that anticipated the regulatory requirements. This "trust premium" is becoming increasingly apparent in B2B tenders, where certification is becoming a decisive selection criterion.
On a more structural level, European certification is gradually becoming a passport for access to public contracts - worth €500 billion a year in the EU. Public tenders are increasingly systematically incorporating AI Act compliance clauses, creating a de facto captive market for European players or multinationals that have invested in compliance.
- The hidden cost of non-regulation: the collapse of trust
The Meta/Cambridge Analytica case offers an instructive counter-factual. Between March and July 2018, the company lost up to $134 billion [9] in market capitalisation at the peak of the crisis - not because of regulatory sanctions, but because of a loss of confidence on the part of advertisers and users. Recurring scandals linked to algorithmic bias (discriminatory recruitment systems, racist facial recognition, toxic chatbots) generate reputational costs that far exceed the investment required for preventive regulatory compliance.
The AI Act thus functions as collective insurance against the risk of a systemic collapse of trust. In regulated sectors with high stakes - health, justice, finance, security - the absence of a robust regulatory framework does not produce unbridled innovation, but institutional timidity. Hospitals, banks and public administrations will only adopt technologies on a massive scale if they are certified and auditable. Far from hindering the deployment of AI in these sectors, the European regulatory framework is a precondition for it.
C. Operational recommendations
Proposal 3: Transform the "Trustworthy AI" label into an international technical standard, negotiated in standards bodies (ISO, ITU). Mobilise European economic diplomacy to make this standard a prerequisite in free trade agreements.
Proposal 4: Create a one-stop compliance shop for SMEs, with a budget of €500 million over five years (i.e. €100 million per year).
This amount is of the same order as the total GenAI4EU budget (€700 million according to the Commission, 2024-2025 [10]), but dedicated exclusively to helping SMEs achieve compliance. By way of comparison, the EIC Accelerator programme allocates up to €2.5 million per start-up for technological innovation; the one-stop shop would make it possible to support around 200 SMEs a year with grants of €500,000, covering auditing, certification, staff training and systems adaptation. The aim is not just to facilitate compliance, but to build a European AI audit and certification industry - an industry that can then be exported to jurisdictions adopting similar frameworks.
Proposal 5: Launch an aggressive "standards diplomacy", making access to the European AI market for non-European companies conditional on regulatory reciprocity clauses. This strategy - already successfully employed for environmental standards - would accelerate the international dissemination of European standards.
IV. THE "THIRD WAY": SELF-FULFILLING PROPHECY OR STRATEGIC IMPASSE?
A. The classic complaint: the illusion of a credible alternative
The official rhetoric of the European Union presents its AI strategy as a "third way" between American surveillance capitalism and Chinese digital authoritarianism. This formulation appeals to European political circles because it transforms a position of objective weakness - the absence of technological champions - into a distinctive ethical stance. However, strategic analysts are increasingly sceptical.
Critics converge on the same diagnosis: this "third way" risks being nothing more than an "ethical museum" - an area of harmless virtue, producing standards without being able to enforce them, principles without the capacity to project them. Faced with massive American investment (the private sector invested $67 billion in 2023) and Chinese strategic direction (a national AI plan worth $150 billion over ten years), Europe would appear condemned to the role of moral commentator on transformations over which it has no control.
B. Strategic counter-reading: the emergence of a trust market
- The underestimated scale of the demand for regulation
Eurobarometer 2024 reveals that 73% of European citizens reject the use of unregulated AI systems[11] in sensitive areas (health, justice, employment). This figure expresses not just an abstract cultural preference, but a real economic constraint: in liberal democracies, no technology can be deployed on a massive scale unless it is socially acceptable. But this constraint is not confined to Europe. The repeated scandals in the United States - from the racist facial recognition of Rekognition (Amazon) to the dangerous hallucinations of medical assistants - are generating a growing demand for regulation, including among the technological elites.
More structurally, the most dynamic economic sectors with the highest added value - precision healthcare, algorithmic finance, predictive legal systems - are precisely those where the need for regulatory compliance is greatest. In these areas, competitive advantage is not built on raw computing power or the size of datasets, but on the ability to produce systems that can be audited, explained and certified. And these attributes correspond exactly to the priorities of European research over the past fifteen years - from explainability (XAI) to formal certification, via frugal AI.
- The 'second mover' advantage: learning from the failures of others
Strategic theory classically distinguishes the advantages of the "first mover" (capturing market share, defining standards) from those of the "second mover" (observing the pioneer's mistakes, optimising processes). In the field of AI, Europe structurally occupies the position of second mover - not by strategic choice, but through objective lag. Rather than deploring this situation, the right strategy is to exploit it.
The massive deployment of AI systems in the United States and China has produced an empirical corpus of failures from which Europe can learn: structural discriminatory biases, authoritarian drift, security vulnerabilities, accelerated obsolescence of skills, concentration of power. European AI solutions - precisely because they incorporate ethical, security and explainability constraints from the design stage - avoid some of these pitfalls. This qualitative difference translates into tangible competitive advantages: medical AI systems certified in Europe are penetrating markets (Japan, Singapore, Canada) where unregulated American solutions come up against regulatory barriers.
- Sovereignty through interoperability: open standards versus walled gardens
The dominant contemporary AI model is based on closed proprietary ecosystems (iOS/Android, AWS/Azure/GCP, GPT/Claude/Gemini), generating massive 'lock-in' effects. This architecture produces a form of geopolitical dependency: adopting a player's ecosystem also means accepting the jurisdiction of its country of origin and the risks of access being cut off unilaterally.
Precisely because it does not control any dominant ecosystem, Europe has an objective interest in promoting open standards and interoperability protocols. This strategy is finding increasing support from governments seeking to avoid exclusive dependence on Sino-American technologies. The strategic partnerships that Europe is forging with medium-sized powers (ASEAN, African Union, Latin America) are not based on the supply of foundation models - an area in which it cannot compete - but on the transfer of regulatory and technical capabilities enabling these countries to build their own sovereign ecosystems.
C. Operational recommendations
Proposal 6: Launch a €3 billion research programme over five years (i.e. €600 million per year) specifically dedicated to explainable and auditable AI.
This amount represents a 40-fold increase in the current European effort on AI transparency and reliability: Horizon Europe allocated €112 million for AI and quantum in 2024, of which only €15 million for transparency and reliability (European Commission, 2024). A €600-million-a-year programme would make it possible to transform what today appears to be a regulatory constraint into a disruptive technological advantage: developing architectures that natively enable traceability, interpretability and formal certification. By way of comparison, this investment is still smaller than the GenAI4EU budget (€700 million), but it focuses on a technological segment in which Europe can aim for global excellence rather than competing head-on with the American foundation models.
Proposal 7: Build a strategy of partnerships with the Global South, not on the model of development aid, but as an alliance of mutual interests. Europe offers its regulatory expertise and certified technologies; its partners offer fast-growing markets and diplomatic support for the adoption of European standards in international forums.
V. STRATEGIC DEPENDENCIES: THE ACHILLES HEEL THAT HAS BECOME A MOBILISING EMERGENCY
A. The brutal facts: anatomy of a systemic vulnerability
The report by the European Court of Auditors (2024) makes an unequivocal diagnosis: Europe's digital infrastructure is critically dependent on non-European players in three key areas. Firstly, cloud computing: 70% of the storage and computing capacity used in Europe [12] is supplied by three American providers (AWS, Microsoft Azure, Google Cloud Platform). Secondly, semiconductors: 90% of the world's production of advanced chips (below 7 nanometres) is concentrated in Taiwan and South Korea. Thirdly, foundation models: the entire European generative AI ecosystem depends on models developed by OpenAI, Anthropic, Google and Meta.
This triple dependence is not just a matter of economic vulnerability - it constitutes a geopolitical risk of the highest order. The semiconductor crisis of 2021, triggered by logistical disruptions linked to COVID-19, paralysed the European automotive industry for eighteen months, destroying €110 billion in added value. A military conflict in the Taiwan Strait, a unilateral decision by Washington to ban access to AI technologies for national security reasons, or a massive cyber attack on US data centres would have even more serious systemic effects.
The French Court of Auditors, in its report on the national AI strategy (2025), points out that "technological dependence also generates normative dependence: systems designed according to non-European legal logics incorporate biases and priorities that run counter to European values". This observation points to a dimension that is often overlooked: over and above material vulnerability, technological dependence erodes Europe's ability to define its own civilisational priorities in a sovereign manner.
B. The window of opportunity: transforming constraint into mobilisation
- The post-Ukraine geopolitical awakening: from rhetoric to investment
Russia's invasion of Ukraine in February 2022 produced a strategic shock comparable, in technological terms, to that of Sputnik for the United States in 1957. It brutally revealed the fragility of European supply chains and the illusion of peaceful interdependence. This shock has triggered a significant reorientation of the budget: the EuroHPC (supercomputer) programme has seen a substantial increase in its budget; the Gaia-X sovereign cloud project, moribund in 2021, has been relaunched with substantial industrial commitments.
More significantly, the European Chips Act (2023) will mobilise €43 billion [13] to reduce Europe's dependence on semiconductors, with the aim of raising Europe's share of world production from 10% to 20% by 2030. The InvestAI initiative, announced in February 2025 at the Paris Summit, marks a major qualitative leap: €200 billion [14] mobilised for AI, of which €20 billion is specifically earmarked for 4-5 AI gigafactories [15], each equipped with 100,000 latest-generation chips, i.e. four times the capacity of current infrastructures.
The President of the Commission, Ursula von der Leyen, compared this project to a "CERN for AI": an open infrastructure giving all European scientists and companies - not just the giants - access to the resources needed to develop cutting-edge models.
Budgetary context: according to the Coordinated AI Plan (2021), the objective was to reach €20 billion a year of combined investment (public and private) by 2030. Until the launch of InvestAI, the Commission was investing around €1 billion per year via Horizon Europe and the Digital Europe programme. OECD-Commission estimates show that the EU had already reached around €25.7 billion in annual investment [16] in 2023, meeting the 2030 target seven years early. InvestAI aims to multiply this effort by 10 over the next five years.
European economic history shows that major technological leaps are often the result of prior humiliations. Airbus was born of the realisation in the 1960s that total dependence on Boeing was an unacceptable vulnerability. Fifty years and €1,000 billion of public and private investment later, Airbus holds 50% of the world civil aviation market. This precedent shows that a long-term European industrial strategy, adequately resourced and politically supported, can produce world champions - provided we accept time horizons that are incompatible with electoral cycles.
- Differentiating technological bets: selective sovereignty
The natural temptation, in the face of identified dependencies, is to aim for total self-sufficiency - an ambition that is as illusory as it is ineffective. No economy, not even Chinese or American, masters the entire technological value chain. The relevant strategy is one of "selective sovereignty": identifying three or four critical technological segments in which Europe can reasonably aim for global excellence, and accepting dependence in the other areas, managing it by diversifying suppliers.
Three technological bets seem particularly promising. Firstly, frugal AI and edge computing: in the face of the energy crisis and climate constraints, the ability to train and deploy high-performance models with limited computational resources is becoming a major competitive advantage. European research in this area (notably the PRAIRIE Institute in Paris and the ELLIS Network) is at the forefront of the world. Secondly, quantum computing: the technological race is still on, and Europe has considerable scientific assets (40% of world publications). Thirdly, specialised semiconductors for AI: rather than trying to catch up with Taiwan on generalist chips, Europe can aim for excellence on specific architectures (neuromorphic computing, processors dedicated to explainable AI).
- Strategic alliances: diversifying to reduce dependency
Reducing dependency involves not only reshoring, but also the geographical diversification of partners. It is in Europe's interest to forge technological alliances with medium-sized powers that share its concerns about sovereignty: Japan (semiconductors, robotics), South Korea (electronics), Israel (cybersecurity) and Canada (ethical AI). These partnerships make it possible to pool R&D costs, gain access to complementary skills and reduce bilateral dependence on the United States or China.
The CERN (European Organisation for Nuclear Research) model offers an institutional precedent: a collectively funded fundamental research infrastructure, operating over multi-decade timeframes, and having generated massive economic spin-offs (the web itself was invented at CERN). InvestAI, explicitly compared to a "CERN for AI", aims to create a shared, open and collaborative infrastructure giving the entire European ecosystem - researchers, start-ups, SMEs and large companies - access to the computational resources needed to develop cutting-edge AI models.
C. Operational recommendations
Proposal 8: Formally identify three critical technologies for European AI sovereignty (e.g. quantum computing, frugal AI, neuromorphic semiconductors) and concentrate 70% of public AI R&D investment on them.
Justification: the Coordinated Plan targets €20 billion a year in combined investment by 2030, including around €7 billion from European public sources (Commission + Member States). Concentrating 70% of this public envelope (i.e. around €5 billion per year) on three or four critical technologies would provide the critical mass needed to aim for global excellence in these segments, rather than dispersing resources across the entire technological spectrum. This strategic focus breaks with the current dispersal of resources and is inspired by the Japanese model of sectoral concentration.
Proposal 9: Negotiate bilateral technology partnerships with Japan and South Korea, explicitly aimed at reducing mutual dependence on the United States and China. These partnerships should include technology transfer and co-development clauses, not just trade agreements.
Proposal 10: Consolidate the InvestAI initiative as a permanent infrastructure of European AI sovereignty, based on the CERN model.
InvestAI is already mobilising €200 billion (€50 billion from the EU public sector + €150 billion from the private sector via European AI Champions), including €20 billion specifically for 4-5 gigafactories. This initiative should become a permanent structure - a "European AI Infrastructure Corporation" - bringing together the Member States, the EIB and industrial partners. Its mission: to build and operate the strategic computing infrastructures and datasets required for European sovereignty, while making them available to the research ecosystem and start-ups. The governance model should be inspired by CERN (annual budget of €1.3 billion, funded by 23 Member States over the past 70 years): collective funding, multi-decade horizon, open access for the entire European scientific and industrial community.
VI. CONCLUSION - THE ENFORCEMENT IMPERATIVE
Summary: from constraint to advantage
This note has shown that the four structural 'weaknesses' of the European strategy - absence of champions, regulatory complexity, ambiguity of the third way, technological dependence - are the result of a mistaken diagnosis. They are handicaps only in relation to a model of technological power - American oligopolistic concentration - whose economic, social and democratic sustainability is increasingly contested.
The European distributed ecosystem is generating systemic resilience in the face of shocks. Far from paralysing innovation, the AI Act is building an infrastructure of trust that can become a sustainable competitive advantage, via the "Brussels Effect". The "third way" corresponds to a growing global demand for technologies that comply with democratic standards. Finally, strategic dependencies have triggered unprecedented budgetary and political mobilisation - illustrated by InvestAI and its €200 billion - opening up the possibility of technological leaps in high added-value niches.
Ethics are not an external brake on innovation, but an infrastructure for competitiveness. In high added-value sectors such as health, finance, justice and security, the ability to produce systems that can be audited, explained and certified is a sine qua non for deployment. And these attributes are precisely what European research has been focusing on for the past fifteen years.
The fatal risk: indecision
The danger is not the European model itself, but our collective inability to fully embrace it. For twenty years, Europe's digital strategy has oscillated between two contradictory temptations: mimicking the American model ("creating unicorns") and asserting its difference ("ethics first"), without ever really choosing. This strategic indecision produces the worst of both worlds: neither the financial clout of the US, nor the consistency of standards needed to project the European model.
The choice is not between copying others and making our own way - that is a false dilemma. What is urgently needed is to move from a regulatory framework, now established with the AI Act, to coordinated industrial action. This implies three breaks with current practice. First, accept massive public investment in strategic infrastructure - InvestAI is a case in point - and accept that technological sovereignty has a cost, albeit a lower one than dependence. Second, impose strategic discipline: concentrate resources on three or four technological bets (70% of public R&D), instead of scattering budgets across the whole spectrum. Third, build an aggressive standards diplomacy, transforming the AI Act into a weapon of commercial conquest rather than a self-inflicted handicap.
Resolving the apparent tension: open standards and concentrated sovereignty
This strategy may seem paradoxical: on the one hand, promoting interoperability and open standards (Proposal 7); on the other, massively concentrating investment on a few critical technologies (Proposals 8-10). But in reality, these two axes are complementary rather than contradictory.
Open standards and interoperability are our geopolitical offering: they are what Europe proposes to the rest of the world as an alternative to the Sino-American walled gardens, and they constitute our comparative advantage in technological diplomacy. By promoting open protocols, interoperable architectures and shared datasets, Europe positions itself as a credible alternative for all players - governments, businesses, researchers - seeking to avoid exclusive dependence on proprietary American or Chinese ecosystems.
Conversely, concentrating investment in three or four critical technologies is a matter of selective sovereignty: identifying the segments where dependence would be strategically unacceptable (quantum computing, specialised semiconductors, frugal AI, explainable AI) and building real autonomy there. This is not a question of total self-sufficiency - a costly and ineffective pipe dream - but of mastering the technologies that determine our ability to define our own rules of the game.
The key is that these sovereign technologies must themselves respect our own standards of openness. In other words: sovereignty in capacities, openness in protocols. ASML, our paradigmatic example, perfectly illustrates this synthesis: a technological monopoly (sovereignty) embedded in an open, international ecosystem (interoperability). Similarly, InvestAI aims to create European gigafactories (computational sovereignty) while guaranteeing open access for the entire scientific and industrial ecosystem (open standards).
This dialectic between strategic concentration and systemic openness is not a contradiction, but our unique value proposition: offering the world an alternative to the dominant closed models, while guaranteeing our autonomy in critical segments. It is precisely this synthesis that can transform Europe's "third way" from a rhetorical aspiration into a geopolitical reality.
The civilisational challenge: historical responsibility
Beyond economic competition, Europe's AI strategy raises a fundamental question of political philosophy: can a technologically advanced society sustainably preserve the achievements of liberal constitutionalism - the rule of law, separation of powers, protection of minorities, individual autonomy? Or does technological progress necessarily imply, as some authoritarian theorists maintain, a weakening of democratic constraints in the name of efficiency?
Europe alone bears the burden of proving empirically that the first option is viable. Neither the United States - where AI regulation is largely left to corporate self-regulation - nor China - where AI explicitly serves social control objectives - can embody this synthesis between technological innovation and fundamental rights. This responsibility stems directly from European history: it was in Europe that individual freedoms (habeas corpus, freedom of expression) and the industrial revolution were invented simultaneously. It was in Europe in the twentieth century that the gamble of democratic regulation of economic power was taken. It was in Europe that the institutions of liberal constitutionalism survived the totalitarian catastrophes.
This historical legitimacy gives rise to a strategic obligation: to demonstrate that ethics and innovation are not antagonistic, but mutually constitutive. Europe's failure in AI would not just be an economic defeat - it would signal the impossibility of a technological modernity that respects human rights, thereby validating authoritarian theses on the incompatibility between democracy and technological efficiency.
So the final question is not a technical one, but a political one. Does the European Union have the collective will to transform these potential assets into real power? Does it have the strategic discipline to stay the course over the next twenty years, regardless of electoral changes and tensions between Member States? Can it overcome the temptation to turn inwards to build the common infrastructures that are essential to continental sovereignty?
These are not issues for forward-looking analysis - they call for immediate political decisions. The time for strategic thinking is over. Now is the time for execution. History will judge Europe not on the quality of its principles, but on its ability to embody them in durable technological institutions. Our generation bears responsibility for this verdict.
BIBLIOGRAPHY
European Commission (2025). A European approach to artificial intelligence. Directorate-General for Communication Networks, Content and Technologies. https://digital-strategy.ec.europa.eu/fr/policies/european-approach-artificial-intelligence
European Commission (2024-2025). Action plan for an AI continent. https://france.representation.ec.europa.eu/informations/intelligence-artificielle-la-commission-propose-un-nouveau-plan-daction-pour-renforcer-son-2025-04-09_fr
European Commission (2025). GenAI4EU: Funding opportunities to boost Generative AI "made in Europe". https://digital-strategy.ec.europa.eu/en/policies/genai4eu
European Commission (2024). New Horizon Europe Funding Boosts European Research in AI and Quantum Technologies. https://digital-strategy.ec.europa.eu/en/news/new-horizon-europe-funding-boosts-european-research-ai-and-quantum-technologies
European Commission (2025). EU launches InvestAI initiative to mobilise €200 billion of investment in artificial intelligence. https://digital-strategy.ec.europa.eu/en/news/eu-launches-investai-initiative-mobilise-eu200-billion-investment-artificial-intelligence
European Court of Auditors (2024). Special report on artificial intelligence in the EU. https://www.eca.europa.eu/fr/publications/sr-2024-08
French Court of Auditors (2025). The national strategy for artificial intelligence: consolidating successes. https://www.ccomptes.fr/fr/publications/la-strategie-nationale-pour-lintelligence-artificielle-consolider-les-succes-de-la
IAPP - International Association of Privacy Professionals (2024). AI Governance and Regulatory Confidence Survey.
Internet Policy Review (2025). "Brussels Effect or Experimentalism? Understanding EU AI Regulation." Journal of European Public Policy, 14(2). https://policyreview.info/articles/analysis/brussels-effect-or-experimentalism
OECD (2025). Progress in Implementing the European Union Coordinated Plan on Artificial Intelligence (Volume 1): Member States' Actions. https://www.oecd.org/en/publications/progress-in-implementing-the-european-union-coordinated-plan-on-artificial-intelligence-volume-1_533c355d-en.html
OECD & European Commission (2025). Advancing the measurement of investments in artificial intelligence. https://oecd.ai/en/wonk/measuring-ai-investment-new-oecd-ec-methodology
[1] Bradford, A. (2020), The Brussels Effect: How the European Union Rules the World, Oxford University Press.
[2] Acemoglu, D. and Johnson, S. (2023), Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, Public Affairs.
[3] European Court of Auditors (2024), Special report on artificial intelligence in the EU, Luxembourg. https://www.eca.europa.eu/fr/publications/sr-2024-08
[4] European Commission (2024-2025), Action plan for an AI continent. https://france.representation.ec.europa.eu/informations/intelligence-artificielle-la-commission-propose-un-nouveau-plan-daction-pour-renforcer-son-2025-04-09_fr
[5] European Commission (2025), A European approach to artificial intelligence. https://digital-strategy.ec.europa.eu/fr/policies/european-approach-artificial-intelligence
[6] OECD (2024), OECD AI Principles: Turning from Aspiration to Action, OECD Digital Economy Papers. https://www.oecd.org/digital/artificial-intelligence/
[7] Internet Policy Review (2025), "Brussels Effect or Experimentalism?", Journal of European Public Policy, vol. 14, no. 2. https://doaj.org/article/c45f5940910c487dab59787b2a907062
[8] IAPP (2024), AI Governance and Regulatory Confidence Survey.
[9] Meta/Facebook public financial data, March-July 2018.
[10] European Commission (2025), GenAI4EU: Funding opportunities. https://digital-strategy.ec.europa.eu/en/policies/genai4eu
[11] Eurobarometer 2024, European Commission data.
[12] IT for Business, "Digital sovereignty: cloud, AI agents and dependencies". https://www.itforbusiness.fr/souverainete-numerique-cloud-agents-ia-et-dependances-99757
[13] European Chips Act (2023), European Commission.
[14] European Commission (2025), EU launches InvestAI initiative. https://digital-strategy.ec.europa.eu/en/news/eu-launches-investai-initiative-mobilise-eu200-billion-investment-artificial-intelligence
[15] European Commission (2025), InvestAI announcement, Paris Summit.
[16] OECD (2025), Progress in Implementing the European Union Coordinated Plan on Artificial Intelligence. https://www.oecd.org/en/publications/progress-in-implementing-the-european-union-coordinated-plan-on-artificial-intelligence-volume-1_533c355d-en.html