AI in editorial management: real risks, useful applications, and decisions it shouldn't make for you

Enthusiasm, fear, or caution? Between promises of automation and ethical dilemmas, artificial intelligence is reshaping how science is filtered, reviewed, and disseminated. This article guides you, without sensationalism or alarmism, through the current uses of AI in scientific journals, the boundaries that should not be crossed, and the criteria for deciding whether it is worthwhile to implement it in your editorial workflow.

1. A manuscript detected at 03:14

Hypothetical scene designed to illustrate semantic detection using AI.

Ana, who was reviewing recent submissions from home that evening, received an automatic alert: a newly uploaded manuscript showed a 78% semantic similarity to an uncited Chinese preprint. There was no textual plagiarism, but the conceptual analysis revealed a worrying match. Thanks to this early detection, Ana initiated an additional review and prevented the publication of a duplicate article.
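The alert Ana received rests on semantic similarity rather than verbatim matching. The sketch below uses a toy bag-of-words cosine similarity as a stand-in so the idea is concrete; a production detector would compare dense sentence embeddings from a language model instead, and the 0.78 threshold mirrors the scene above rather than any recommended value. All names here are illustrative.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Toy stand-in for semantic similarity: cosine over word counts.
    A real detector would compare dense sentence embeddings instead."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def flag_if_similar(manuscript: str, preprint: str, threshold: float = 0.78) -> bool:
    """Raise an editorial alert when similarity crosses the journal's threshold."""
    return cosine_similarity(manuscript, preprint) >= threshold
```

The point of the sketch is the workflow, not the metric: whatever model computes the score, the system only flags the pair; a human editor like Ana decides what the match means.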

1.1 From spell checker to semantic detection

  • 2000s : Tools like Grammarly correct grammar and style.
  • 2010s : Turnitin and iThenticate detect exact text matches.
  • 2020s : Language models like GPT analyze the deep meaning of content, detect conceptual equivalences, and summarize complex ideas.

1.2 Why now? The context that makes it inevitable

Just five years ago, discussing artificial intelligence in a scientific journal sounded like something out of a Silicon Valley lab or editorial futurism. Today, the reality is quite different: language models are no longer just promises of innovation, but operational tools in universities, evaluation agencies, and platforms like Scopus and Crossref. The question has shifted from whether we will use AI to how we will use it, and under what conditions. Journals that anticipate this change, regulate it, and incorporate it ethically not only save time but also strengthen their editorial reputation with increasingly demanding authors, reviewers, and readers.

1.3 AI as a mirror: what it reveals about our processes

One of the most valuable side effects of introducing AI into a scientific journal is that it forces a more structured approach to what was previously done intuitively or haphazardly. To train a model or automate a task, it must first be explained, documented, and understood. This exercise in technical introspection often reveals bottlenecks, redundancies, or decisions that depended on a single person. Therefore, even if the final decision is not to automate anything, simply evaluating the feasibility of AI already provides operational clarity. It's an opportunity to thoughtfully redesign processes that, for years, have been sustained by inertia or goodwill.

2. Process map where AI already adds value

Phase         | Task                    | AI today               | Benefit
--------------|-------------------------|------------------------|------------------------------------------
Intake        | Thematic classification | BERT / GPT             | Desk reject 2× faster
Curation      | Metadata check          | NLP rules + LLM        | Clean XML with little human intervention
Review        | Reviewer's assistant    | Extractive QA          | Flags missing key data
Production    | DOCX→XML conversion     | Sequence models        | Fewer errors in JATS tags
Dissemination | Graphical summaries     | Vision-language models | Altmetric ↑ 25%
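To make the intake row of the process map concrete, the function below routes a manuscript to a thematic section. It is a deliberately simple keyword-overlap placeholder: the map assumes a BERT- or GPT-based classifier in production, and the section names and keywords here are hypothetical.

```python
# Hypothetical section keywords; a production system would use a fine-tuned
# BERT/GPT classifier rather than keyword matching.
SECTION_KEYWORDS = {
    "epidemiology": {"incidence", "prevalence", "cohort", "outbreak"},
    "health policy": {"policy", "regulation", "coverage", "reform"},
}

def classify_abstract(abstract: str, default: str = "general") -> str:
    """Assign the section whose keywords overlap most with the abstract."""
    words = set(abstract.lower().split())
    best, best_hits = default, 0
    for section, keywords in SECTION_KEYWORDS.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best, best_hits = section, hits
    return best
```

Even this crude triage shows why desk-reject decisions speed up: the editor starts from a suggested section instead of reading every submission cold.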

3. Illustrative cases

3.1 Journal of Public Health

In 2024 it implemented AI-based automatic classification through third-party external services:

  • Reviewer assignment decreased from 9 to 3 days.
  • Editorial disagreements did not increase.

3.2 Andean Geology Bulletin

It uses a fine-tuned GPT-4 model to generate graphical summaries:

  • Altmetric Attention went from 17 to an average of 46.
  • Shares on Twitter/X grew 2.3×.

4. Risks and ethical dilemmas

  1. Hallucinations: AI can “invent” figures.
    Mitigation: cross-validation with the Crossref and PubMed APIs.
  2. Training bias: Anglocentric models undervalue regional topics.
    Mitigation: use multilingual corpora and fine-tune with local articles.
  3. Privacy: confidential manuscripts sent to external APIs.
    Mitigation: on-premise AI or DPA agreements.
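The Crossref cross-validation mentioned in point 1 can be sketched as follows. Crossref's public REST API serves work metadata at `https://api.crossref.org/works/{doi}` and answers 404 for unknown DOIs; the fetcher is injected here so the check can be exercised offline, and the minimal error handling is an assumption of this sketch.

```python
import json
from urllib.request import urlopen
from urllib.error import HTTPError

CROSSREF_WORKS = "https://api.crossref.org/works/"

def default_fetch(url: str):
    """Fetch JSON from Crossref; return None when the DOI is unknown (HTTP 404)."""
    try:
        with urlopen(url) as resp:
            return json.load(resp)
    except HTTPError:
        return None

def doi_exists(doi: str, fetch=default_fetch) -> bool:
    """Guard against hallucinated references: confirm the DOI resolves in Crossref."""
    return fetch(CROSSREF_WORKS + doi) is not None
```

A reference list generated or summarized by AI would be passed through a check like this before reaching a reviewer, so invented citations are caught mechanically rather than by luck.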

5. Regulatory framework and guidelines 2025

Body      | Requirement               | In force
----------|---------------------------|-----------------
COPE      | Declare AI use in Methods | 2023
SciELO    | Public AI policy          | 2025
EU AI Act | Risk classification       | 2026 (expected)

6. Responsible implementation in 4 phases

  1. Process map: identify repetitive tasks that can be automated.
  2. Limited pilot: choose a single module (e.g., image detection).
  3. Multidimensional evaluation: measure time saved, error rate, and user perception.
  4. Living policy: a document you update every six months with new safeguards.

Self-assessment checklist

  • Does AI access full manuscripts outside of journal servers?
  • Is there an appeals mechanism for authors?
  • Are prompt and response logs reviewed?
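The last checklist item presupposes that prompt and response logs exist in a reviewable form. Below is a minimal sketch of one audit record; it stores SHA-256 hashes rather than raw text so the log itself cannot leak confidential manuscripts. The field names are assumptions, not a standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(prompt: str, response: str, model: str) -> str:
    """Build one reviewable log entry for an AI call.
    Hashes, not raw text, are stored so the audit trail itself
    does not expose confidential manuscript content."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    return json.dumps(record)
```

Hashing trades readability for confidentiality: a reviewer can verify that a given prompt was or was not sent, and when, without the log doubling the journal's exposure.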

7. Essential FAQ

Can AI decide the final acceptance of an article?

No. Artificial intelligence can assist in technical or pre-evaluation tasks, but the editorial decision must remain human.

Do I need in-house programmers to implement AI solutions?

Not necessarily. There are platforms—like Index—that already integrate ready-to-use AI-based features and can offer personalized consulting based on your resources and objectives.

Can AI make serious mistakes or "hallucinate" information?

Yes. Language models can generate incorrect data or unverified claims if used without oversight. That's why it's crucial to always review their results and combine their use with validated sources like Crossref or PubMed.

What types of tasks are safe to delegate to AI without ethical risks?

Mechanical or repetitive tasks such as subject classification, detection of incomplete metadata, basic structure validation, format conversion, or generation of graphical summaries. It should never replace scientific judgment.

Are there international policies that regulate the use of AI in scientific journals?

Yes. COPE (2023) requires the disclosure of AI use in research methods. SciELO is preparing a specific policy for 2025, and the European Union will implement the AI Act in 2026. They all agree on key principles: transparency, human oversight, and data protection.

What if an author does not want their manuscript processed by AI?

Some journals allow authors to indicate this in their cover letter. In all cases, it is advisable to clearly state which processes use AI, for what purpose, and how confidentiality is protected.

How does this affect the editor's work?

Used properly, AI doesn't replace the editor: it gives them back their time. It automates operational tasks so the editor can focus on what no machine can do well: evaluating scientific quality, contextualizing findings, and making strategic decisions.

8. Practical conclusion

Artificial intelligence does not replace editorial judgment: it is a tool that, when properly applied, frees up time and energy for what remains irreplaceable—the critical, contextual, and human evaluation of scientific knowledge.

Want to design an editorial AI pilot that respects ethics and your budget? Schedule a 25-minute exploratory call with our team and get a free feasibility report.