In its document of 8 May 2026, "Living Guidelines on the Responsible Use of Generative AI in Research", the European Commission set out practical recommendations for using generative artificial intelligence responsibly in scholarly work.
The report does not prohibit the use of generative AI tools such as ChatGPT or Copilot; rather, it acknowledges that they can provide genuine support to researchers in their everyday work. At the same time, it clearly states that AI must not replace the researcher’s responsibility, methodological rigour, or critical thinking. The report’s main message can be summed up in one sentence: generative AI can be a valuable assistant to researchers, but it should not serve as an author, reviewer, or independent decision-maker in the research process.
In practice, this means that every researcher using AI should keep several fundamental principles in mind:
- AI can provide support, but responsibility always rests with humans
Generative AI can support researchers in editing texts, summarising materials, organising arguments, generating ideas, working with code, or conducting preliminary data analysis.
However, it does not assume responsibility for results, interpretations, citations, or the final content of a publication. All AI-generated outputs should be verified by a human.
Key takeaway: AI is a supporting tool, but scientific decisions and responsibility remain with the researcher.
- AI cannot be the author of a scientific publication
An AI system should not be listed as an author or co-author of a scientific article. Authorship implies responsibility for the content: the ability to defend the results, explain the methodology, and remain accountable to the scientific community.
AI does not have such capabilities and cannot be held responsible for errors.
Key takeaway: AI can help prepare a text, but it cannot be treated as a co-author.
- The use of AI should be disclosed if it had a significant impact on the research
Not every use of AI requires a detailed description. Simple language or stylistic editing may be treated as standard editorial assistance.
However, the use of AI should be disclosed if the tool supported data analysis, interpretation of results, literature review, formulation of hypotheses, methodology design, code development, or the preparation of conclusions.
In such cases, it is advisable to provide the name of the tool, its version, the purpose of its use, and the stage of the work at which it was applied.
Key takeaway: If AI had a real impact on the research, its use should be clearly described.
- AI-generated outputs must be checked
AI can produce convincing texts, but it can also make mistakes. It may generate false information, non-existent publications, inaccurate summaries, oversimplifications, or biased interpretations.
Researchers should therefore verify citations, sources, summaries, code, data, and conclusions generated by AI.
Key takeaway: AI can accelerate work, but it does not remove the obligation to critically verify its outputs.
- Confidential, personal, or unpublished data should not be entered into AI tools
Personal, medical, or otherwise sensitive data, as well as unpublished results, manuscripts, reviews, grant documents, and other confidential materials, should not be entered into public AI tools.
Some tools may store data or use it for further model training. This may violate confidentiality, GDPR, copyright, or institutional policies.
Key takeaway: Data that cannot be safely made public should not be entered into AI tools.
- AI should not replace reviewers or experts
AI should not independently evaluate scientific articles, grant proposals, researchers’ achievements, projects, or the quality of research.
Such use may breach confidentiality, lead to inaccurate assessments, reproduce biases, and affect researchers’ careers in a non-transparent way.
Key takeaway: Peer review and scientific evaluation require expert responsibility, which AI cannot provide.
- Research institutions should establish clear rules for the use of AI
Universities, institutes, and research organisations should support researchers in the responsible use of AI.
They should develop AI use policies, provide training, identify safe tools, protect data, update ethical procedures, and promote transparency.
Key takeaway: Institutions should help researchers use AI safely, consciously, and in accordance with established principles.
- Funders must also clearly define rules on AI
Research funding organisations should explain whether and how applicants may use AI. They should also specify when the use of AI must be disclosed.
Funders must protect the confidentiality of proposals and should not use AI for the automated assessment of the quality of scientific projects.
Key takeaway: AI can support grant-related processes, but it should not determine the value of research.
- Hidden prompts are a new risk in research
Hidden prompts are invisible instructions that may influence the operation of an AI system analysing a document. They may be embedded, for example, in metadata, comments, invisible text, or attachments.
Their purpose may be to manipulate the assessment of a document, such as a grant proposal or publication (a sketch of how such hidden text might be detected follows this item).
Key takeaway: Institutions should safeguard their procedures against hidden attempts to influence AI systems.
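To make this risk concrete, the sketch below shows one way a screening step might flag instruction-like hidden text in a submitted PDF before the document is handed to an AI system. This is a minimal illustration under stated assumptions, not part of the Commission’s guidelines: the pypdf library, the file name proposal.pdf, and the phrase list are hypothetical choices made for the example.

```python
# Minimal sketch: flag instruction-like strings hidden in a PDF's text layer
# or metadata before the document is passed to an AI reviewer.
# Assumptions: pypdf is installed; "proposal.pdf" and the phrase list below
# are hypothetical examples, not a vetted ruleset.
import re
from pypdf import PdfReader

SUSPICIOUS = [
    r"ignore (?:all |any )?previous instructions",
    r"disregard the (?:above|preceding)",
    r"rate this (?:proposal|paper) (?:highly|favourably|favorably)",
]

def find_hidden_prompts(path: str) -> list[str]:
    reader = PdfReader(path)
    # White-on-white or zero-size text is still present in the PDF text
    # layer, so extract_text() will surface it alongside visible text.
    chunks = [page.extract_text() or "" for page in reader.pages]
    # Document metadata fields are another common hiding place.
    if reader.metadata:
        chunks += [str(value) for value in reader.metadata.values()]
    hits: list[str] = []
    for text in chunks:
        for pattern in SUSPICIOUS:
            hits += re.findall(pattern, text, flags=re.IGNORECASE)
    return hits

if __name__ == "__main__":
    matches = find_hidden_prompts("proposal.pdf")
    if matches:
        print("Possible hidden prompts found:", matches)
    else:
        print("No instruction-like phrases detected.")
```

A phrase list like this is easy to evade, so such a filter can only be a first line of defence; flagged documents still require human review, in line with the takeaway above.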
- Responsible use of AI also includes environmental considerations
Generative AI requires substantial computational resources, energy, and infrastructure. It should therefore be used reasonably and in proportion to the task.
Not every task requires an advanced AI model. Sometimes, a simpler tool, a local model, conventional software, a scientific search engine, or expert analysis is sufficient.
AI is becoming an important tool supporting research, but its use requires adherence to clear principles. Artificial intelligence cannot assume responsibility for generated content, decisions, or reviews, nor can it become a co-author of a work. The responsible use of language models requires transparency, verification of outputs, and protection of personal and confidential data.
