The government announces the creation of INESIA, an ethical and technical agency responsible for identifying the excesses of AI while also promoting its use. But with what power?

While some are creating AI, others are trying to regulate it. This is the case in France, where on Friday, January 31, 2025, Clara Chappaz, Minister Delegate for Artificial Intelligence and Digital Affairs, announced the launch of a public body dedicated to studying the impacts and risks associated with the expansion of AI: the "National Institute for the Evaluation and Security of Artificial Intelligence" (INESIA).
Its mission is clear but not simple: to guarantee national security in this area. That covers military security, the security of uses, the security of users and their data, but also the development of AI adoption.
This new actor is also overseen by a double guardianship, military and economic, with on one side the General Secretariat for Defense and National Security (SGDSN), attached to the Prime Minister, and on the other the General Directorate for Enterprises (DGE), linked to the Ministry of the Economy, Finance and Industrial and Digital Sovereignty.
Federating four existing entities to better detect the impacts of AI
This entity does not swell the ranks of the nearly 1,400 agencies the French State already has: without being a legal structure in its own right, it brings together four existing entities already responsible for evaluation and security:
- The National Agency for the Security of Information Systems (ANSSI),
- The National Institute for Research in Digital Science and Technology (Inria),
- The National Metrology and Testing Laboratory (LNE),
- The Center of Expertise for Digital Regulation (PEReN).

Agencies whose existence, apart from ANSSI, few people knew about.
This collaboration aims to structure a national ecosystem bringing together researchers, engineers and political decision-makers to anticipate risks while supporting innovation. A balancing act if ever there was one.
With INESIA, the Ministerial Delegation for AI says, "the State is materializing its commitment to the controlled development of AI within a framework of trust and security," in line with the commitments made in 2024 in the "Seoul Declaration for Safe, Innovative and Inclusive AI," adopted by Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United Kingdom and the United States.
Why is monitoring AI essential?
The rapid rise of artificial intelligence is profoundly transforming economic, social and industrial sectors, but it also comes with major risks. Among those already identified:
- Cybersecurity threats: by 2025, 74% of French companies identify AI-powered attacks as their top security challenge. AI-enhanced malware, deepfakes and sophisticated phishing attacks threaten the protection of critical data and infrastructure.
- Environmental impact: the massive use of AI models, such as ChatGPT or Gemini, consumes significant energy resources. A government tool has also been put online to raise awareness of the ecological cost of AI requests.
- Ethical issues: AI systems can exacerbate discrimination, manipulate human behavior or violate fundamental rights, as the debates around social scoring and biometric recognition technologies have shown. Both uses are, incidentally, now banned by the new European regulation on AI, which came into force on February 2 and more broadly prohibits AI systems posing an "unacceptable risk".
These issues, combined with the extreme speed at which AI is evolving and spreading worldwide, underline the urgent need for supervision of this technological revolution.
Strengthened international engagement and questions…
Some want to see in this initiative a key step towards responsible governance of artificial intelligence.
One question remains: what power(s) will this agency have to fulfil its mission and curb certain excesses? Isn't there already a redundancy with the work of the Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST), which last December produced a report with 18 recommendations for supervising AI and its development? Finally, what collaboration is planned between INESIA and the European bodies behind the AI Act and the European regulatory system in force since February 2?