Policy on the use of Artificial Intelligence (AI)

 

Language Value has recently developed a specific policy on the use of Artificial Intelligence (AI) in manuscripts. This policy is based on the principles of honesty, scientific rigour, and transparency in scientific activity. Our purpose is to provide guidance both to authors in their writing process and to reviewers in their evaluation work.

 

1. Manuscript authorship and artificial intelligence

In line with the principles that govern writing and publication in scientific journals, Language Value guarantees respect for originality, honest research, and the advancement of the discipline. In this context, the collection of bibliographic sources and linguistic correction are recognized as legitimate uses of AI. However, AI alone does not add value to scientific knowledge, since such value can only be generated by the authors.

This position is based on guidance issued by the U.S. AI committee, which has determined that AI-generated content is not subject to copyright, since it is produced by language models trained on countless uncited sources under the principle of fair use of information. Consequently, authorship implies both responsibility for the content and contractual guarantees regarding the integrity of the work, aspects that are uniquely human.

Therefore, under the principle of intellectual honesty, we ask authors to do the following:

  • Make explicit the use of generative AI and AI-assisted tools in their manuscript. This should be indicated in a note to the editor and in the methodology section of the article, specifying in which parts of the text these tools have been applied.
  • Use AI responsibly, respecting our editorial policies on authorship and publication ethics. It is essential that authors rigorously review and edit their manuscripts, as AI can generate content that, although it may appear accurate, can be incorrect, incomplete, or biased. Final responsibility for the originality, validity, and integrity of the work rests solely with the authors.

The following are considered misuses of AI:

  • The use of these tools to analyse and extract ideas from data in a research context.
  • The generation of texts without critical evaluation and in-depth analysis by the author.

If, during the evaluation or editorial process, an illegitimate use of AI is detected, the article will be withdrawn from the publication process.

 

2. Use of images

Language Value does not support the use of generative AI or AI-assisted tools for the creation or modification of images in submitted manuscripts, especially when these images form part of the analysis, as they could distort the research data.

 

3. Peer review and artificial intelligence

Blind peer review is an essential pillar of academic publishing, as reviewers' evaluations and recommendations assist editors in decision-making and ensure the validity, rigour, and integrity of published research.

This process is based on mutual trust among all parties, and confidentiality is a fundamental principle. Therefore, we ask reviewers not to upload the manuscripts they review to generative AI tools, as content processed by these platforms can be reused and made accessible to other users.