BMJ’s Policy on AI in Scientific Publishing: Transparency and Responsibility

the health strategist
institute for strategic health transformation 
& digital technology

Joaquim Cardoso MSc.


Chief Research and Strategy Officer (CRSO),
Chief Editor and Senior Advisor

September 25, 2023

One-page summary

What is the message?

The use of Artificial Intelligence (AI) in content creation has significant potential benefits, but also raises concerns about accuracy, accountability, and quality.

The BMJ (British Medical Journal) has developed a policy to ensure transparency and responsible use of AI in scientific publishing.

This policy allows BMJ journals to consider content generated using AI as long as it is declared and described clearly, but also reserves the right to decline content without proper disclosure.


Key Takeaways:

1. The Power of AI:

AI, particularly Large Language Models (LLMs) such as ChatGPT, can rival human capabilities in knowledge, speed, and content generation. It can rapidly produce text, code, and media and answer complex questions, presenting significant potential in various industries, including academic publishing.

2. Potential Benefits and Concerns:

While AI offers opportunities like overcoming language barriers, there are concerns about the accuracy and reliability of content produced by AI. Questions arise regarding AI’s accountability, content quality, and the risk of bias, misconduct, and misinformation.

3. BMJ’s Transparency Policy:

BMJ’s policy on AI use in content creation prioritizes transparency. Authors are required to declare their use of AI in detail so that editors, reviewers, and readers can assess its suitability. Failure to declare AI use may result in content rejection or retraction.

4. Authorship and Accountability:

AI cannot be considered an “author” according to BMJ, ICMJE, or COPE criteria, as it cannot be accountable for submitted work. The responsibility for content, whether AI was used or not, lies with the guarantor or lead author.

5. Industry Alignment:

BMJ’s policy aligns with organizations like WAME and COPE, ensuring uniform standards for AI-generated content, whether produced externally or internally.

6. Future Considerations:

As AI continues to evolve, BMJ acknowledges the need to adapt its policy. AI can be a whirlwind of change, and journals and publishers must work with it to harness its opportunities while mitigating potential risks.

7. AI’s Role in Publishing:

AI can improve content quality and language, enhance content tagging, and curate or recommend content, as sketched below. It can be a transformative force in the publishing industry, offering both challenges and revolutionary opportunities.
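To make this takeaway concrete, the sketch below shows, in Python, how a publisher's workflow might route a manuscript abstract through an AI tagging step while logging that use so it can be declared to editors, reviewers, and readers. It is a minimal illustration only: the suggest_tags_with_llm function is a hypothetical placeholder (stubbed with a keyword match so the example runs offline), not part of BMJ's workflow or any particular AI vendor's API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Manuscript:
    title: str
    abstract: str
    tags: List[str] = field(default_factory=list)
    ai_use_log: List[str] = field(default_factory=list)  # every AI-assisted step, recorded for later disclosure

def suggest_tags_with_llm(abstract: str) -> List[str]:
    """Hypothetical AI tagging call.
    A real workflow would send the abstract to an approved LLM and parse the
    suggested subject labels; here a simple keyword match stands in so the
    sketch runs without external services.
    """
    vocabulary = ["artificial intelligence", "publication ethics", "peer review"]
    return [term for term in vocabulary if term in abstract.lower()]

def tag_manuscript(ms: Manuscript) -> Manuscript:
    suggested = suggest_tags_with_llm(ms.abstract)
    ms.tags.extend(suggested)
    # Log the AI-assisted step so it can be disclosed, in line with a transparency policy.
    ms.ai_use_log.append(f"AI-suggested subject tags: {suggested}")
    return ms

if __name__ == "__main__":
    ms = Manuscript(
        title="Riding the whirlwind",
        abstract="An editorial on artificial intelligence and publication ethics in scientific publishing.",
    )
    tag_manuscript(ms)
    print(ms.tags)        # ['artificial intelligence', 'publication ethics']
    print(ms.ai_use_log)  # one entry describing the AI-assisted tagging step

The point of the sketch is the ai_use_log field: whichever tool is used, the AI-assisted step is captured at the moment it happens, which is what makes a later declaration straightforward rather than reconstructed from memory.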

In conclusion:

AI’s growing role in scientific publishing necessitates clear policies to ensure transparency and responsible use. BMJ’s approach focuses on transparency, adaptability, and accountability, recognizing AI’s potential to transform the industry while addressing its challenges. As AI continues to evolve, BMJ remains committed to reviewing and adapting its policy to navigate this whirlwind of change effectively.

DEEP DIVE

Riding the whirlwind: BMJ’s policy on artificial intelligence in scientific publishing [excerpt]

The BMJ

Helen Macdonald and Kamran Abbasi

September 8, 2023

Artificial intelligence (AI) can rival human knowledge, accuracy, speed, and choices when carrying out tasks. The latest generative AI tools are trained on large quantities of data and use machine learning techniques such as logical reasoning, knowledge representation, planning, and natural language processing. They can produce text, code, and other media such as graphics, images, audio, or video. Large language models (LLMs), which are a form of AI, are able to search, extract, generate, summarise, translate, and rewrite text or code rapidly. They can answer complex questions (called prompts) at search engine speeds that the human mind cannot match.

AI is transforming our world, and we are not yet fully able to comprehend or harness its power. It is a whirlwind sweeping up all before it. Availability of LLMs such as ChatGPT, and growing awareness of their capabilities, is challenging many industries, including academic publishing. The potential benefits for content creation are clear, such as the opportunity to overcome language barriers. However, there is also potential for harm: text produced by LLMs may be inaccurate, and references can be unreliable. Questions remain about the degree to which AI can be accountable and responsible for content, the originality and quality of content that is produced, and the potential for bias, misconduct, and misinformation.

Ensuring transparency

BMJ group’s policy on the use of AI in producing and disseminating content recognises the potential for both benefit and harm and aims primarily for transparency. The policy allows editors to judge the suitability of authors’ use of AI within an overarching governance framework (https://authors.bmj.com/policies/ai-use). BMJ journals will consider content prepared using AI as long as use of the technology is declared and described in detail so that editors, reviewers, and readers can assess suitability and reasonableness. Where use of AI is not declared, we reserve the right to decline to publish submitted content or retract content.

With greater experience and understanding of AI, BMJ may specify circumstances in which particular uses are or are not appropriate. We appreciate that nothing stands still for long with AI; editing tasks enabled by AI embedded in word processing programmes or their extensions to improve language, grammar, and translation will become commonplace and are more likely to be acceptable than use of AI to complete tasks linked to authorship criteria.1 These tasks include contributing to the conception and design of the proposed content; acquisition, analysis, or interpretation of data; and drafting or critically reviewing the work.

BMJ’s policy requires authors to declare all use of AI in the contributorship statement. AI cannot be an “author” as defined by BMJ, the International Committee of Medical Journal Editors (ICMJE), or the Committee on Publication Ethics (COPE) criteria, because it cannot be accountable for submitted work.1 The guarantor or lead author remains responsible and accountable for content, whether or not AI was used.

BMJ’s policy mirrors that of organisations such as the World Association of Medical Editors (WAME),2 COPE,3 and other publishers. All content will be held to the same standard, whether produced by external authors or by editors and staff linked to BMJ. Our policy on the use of AI for drafting peer review comments and any other advisory material is similar. All use must be declared, and editors will judge the appropriateness of that use. Importantly, reviewers may not enter unpublished manuscripts or information about them into publicly available AI tools.

It is imperative for journals and publishers to work with AI, learn from and evaluate new initiatives in a meaningful but pragmatic way, and devise or endorse policies for the use of AI in the publication process. The UK's Science Technology and Medicine Integrity Hub (a membership organisation for the publishing industry which aims to advance trust in research)4 outlined three main areas that could be improved by AI: supporting specific services, such as screening for substandard content, improving language, or translating or summarising content for diverse audiences; searching for and categorising content to enhance content tagging or labelling and the production of metadata; and improving user experience and dissemination through curating or recommending content.

BMJ will carefully assess the effect of AI on its broader business and will publicly report use where appropriate. New ideas for trialling AI within BMJ’s publishing workflows will be assessed on an individual basis, and we will consider factors such as efficiency, transparency and accountability, quality and integrity, privacy and security, fairness, and sustainability.

AI presents publishers with serious and potentially existential challenges, but the opportunities are also revolutionary. Journals and publishers must maximise these opportunities while limiting harms. We will continue to review our policy given the rapid and unpredictable evolution of AI technologies. AI is a whirlwind capable of destroying everything in its path. It can’t be tamed, but our best hope is to learn how to ride the whirlwind and direct the storm.

References

This is an excerpt version of the original publication.

Authors and Affiliations

Helen Macdonald, publication ethics and content integrity editor

Kamran Abbasi, editor in chief, The BMJ

BMJ, London, UK

Originally published at https://www.bmj.com
