HORIZON EUROPE
Advisory support and network to counter disinformation and foreign information manipulation and interference (FIMI)
HORIZON-CL2-2025-01-DEMOCRACY-01

Expected Outcome:
Projects should contribute to all of the following expected outcomes:
- EU institutions and national decision-makers, practitioners in relevant sectors, civil society organisations and other societal actors have an increased understanding of the validity of theoretical models and of the possibilities to implement recommendations, toolkits, methodologies and other solutions to prevent and counter FIMI and related disinformation actions.
- Practitioners in relevant sectors, civil society organisations and other societal actors involved in the design and implementation of measures to prevent and counter FIMI, including disinformation, in different sectors have access to a network and tailor-made advisory support.
- EU institutions and national decision-makers are equipped with science-based tools and evidence-based policy recommendations to proactively conceive, implement, and innovate measures to prevent and counter FIMI and related disinformation actions, and other actions instigated by third countries.
In addition, projects should contribute to at least one of the following expected outcomes:
- Frameworks and approaches that advance common understanding and facilitate collaboration to address and counter disinformation and FIMI, such as D-RAIL[1] or the DISARM framework[2], are enhanced, improved or complemented, to foster their adoption by a wider audience of professionals.
- Practitioners in relevant sectors (such as education, security, defence, transport, foreign relations, ICT, media, etc.), civil society organisations and other societal actors have better knowledge and increased awareness of the challenges posed by disinformation and FIMI and of their pervasiveness in their respective sectors.
- EU institutions and national decision-makers, practitioners in relevant sectors, civil society organisations and other societal actors have evidence of the ways of working and impact of new technologies (AI, Big Data, etc.) in the creation and dissemination of disinformation content and FIMI activities and have new tools and methods to design and implement appropriate initiatives to address these phenomena.
- Citizens, civil society organisations and other societal actors have increased capacities to identify and counter disinformation content and FIMI and related disinformation actions.
Scope:
Hybrid threats, and more specifically the phenomena of disinformation and FIMI[3], are a growing danger to democracy, human rights, social cohesion, and European security. In recent years, the EU has developed and started to implement several strategies and numerous projects to counter disinformation and FIMI.
The aim of this action is to bring to society the benefits of previous EU-funded research (including SSH research) on disinformation and FIMI in the field of democracy and governance, whether funded under Horizon 2020, Horizon Europe or other relevant programmes (such as Citizens, Equality, Rights and Values, Digital Europe, and Global Europe). To this end, proposals should build on the rich stock of actionable recommendations, knowledge, toolkits, educational material, scientific methods, etc. developed in particular by the several Horizon 2020 and Horizon Europe projects on disinformation and FIMI, and make them accessible to a wider audience (i.e., professionals in various sectors, including media, education, security, defence, transport, foreign relations, ICT, etc.).
Several projects[4] funded under Horizon 2020 have aimed to conceive and implement solutions that help professionals spot and debunk mis- and disinformation and information manipulation, or address hybrid threats. Proposals should indicate which Horizon 2020 projects are considered sources of research results relevant to the activities to be carried out, and are encouraged to seek collaboration with these research teams. Recent projects funded under Horizon Europe specifically investigate the FIMI phenomenon. Proposals should build on, and seek cooperation with, past and ongoing EU-funded projects,[5] as well as EU-led initiatives, such as the One-Stop-Shop for Tackling R&I Foreign Interference. Proposals should indicate which additional Horizon Europe projects they would build on, should there be more than those funded under the mentioned topics.
Proposals should further develop frameworks already in use by FIMI and disinformation practitioners (such as the DISARM Framework). Proposals should also consider the work done by the EDMO Hubs[6], find ways to integrate these results into the advisory support, and design actions to disseminate results to the Hubs.
The capacity-building activities and advisory support should be addressed to a wide range of stakeholders and potential end-users, including non-scientific and non-academic actors, such as public bodies, NGOs, fact-checkers, civil society organisations, policymakers, educational bodies, law practitioners, or other potential end-users of the research results. The involvement of one or more of these categories of stakeholders is required to test and take up the research results and to explore their readiness to be implemented and replicated. Those activities and support could also involve signatories of the Code of Conduct on Disinformation, media companies, public and private broadcasters, online news platforms, digital services subject to the European Media Freedom Act (EMFA), and other private entities, such as providers of intermediary services under the Digital Services Act (DSA). The involvement of these categories of stakeholders is required to provide researchers with access to the data necessary to undertake research, including platform data on the spread and behaviour of disinformation online.
With the emergence of new technologies (especially those based on [generative] Artificial Intelligence and the use of Big Data), the actors promoting disinformation and FIMI activities have significantly increased their capacity to act: they are able to develop more targeted content across a broader spectrum of sectors, and they are more effective than with earlier approaches based, for instance, on bot farms[7]. AI could also be used to develop new disinformation detection technologies, while addressing the ethical and legal challenges implied.
Proposals should identify gaps in research, in particular with regard to access to data, as well as other obstacles to large-scale scientific inquiry into disinformation and FIMI threats. Based on an analysis of ongoing and past research and innovation projects, they should identify challenges and opportunities, particularly those offered by generative Artificial Intelligence in the generation, dissemination, detection and debunking of disinformation and, more broadly, FIMI activities. The proposals should make concrete recommendations on how the gaps in research could be filled.
Proposals are encouraged to also address the issue of identity-based disinformation and FIMI targeting LGBTIQ people.
Where applicable, proposals should leverage the data and services available through European Research Infrastructures federated under the European Open Science Cloud, as well as data from relevant Data Spaces. Particular efforts should be made to ensure that the data produced in the context of this topic is FAIR (Findable, Accessible, Interoperable and Re-usable).
[1] https://www.disinfo.eu/publications/directing-responses-against-illicit-influence-operations-d-rail
[2] https://www.disarm.foundation/framework
[4] Such as EU-HYBNET (https://cordis.europa.eu/project/id/883054).
[5] In particular, projects funded under HORIZON-CL2-2023-DEMOCRACY-01-01: Detecting, analysing and countering foreign information manipulation and interference; HORIZON-CL2-2023-DEMOCRACY-01-02: Developing a better understanding of information suppression by state authorities as an example of foreign information manipulation and interference; HORIZON-CL3-2021-FCT-01-03: Disinformation and fake news are combated and trust in the digital world is raised.