Using Generative AI

How can Generative AI help with intervention development studies?

Generative AI chatbots respond to the prompts a user gives them. They are capable of revising text to make it more accessible, whether that is text for an intervention or participant-facing research documents.

In our tests (carried out in early 2024), we had good experiences with ChatGPT using GPT-4 (the paid version), Claude using Claude 3 Sonnet (the free version), and Microsoft Copilot. ChatGPT using GPT-3 (the free version) did not work as well.

There are other AI chatbots you may want to try. Generative AI is improving at a rapid rate so newer and better AI chatbots are likely to be made available soon.

This guide will help you to use generative AI to amend text to make it more accessible while ensuring it is still acceptable to users.


Before you start

You should start the process with a draft of your written content.

Before beginning, you should think about the outcome you want. List out the format, language, tone, length, reading level, structure, and formatting you would like. This will help you be more specific and detailed in your prompt.

Step 1 – Starting a conversation

Open the chatbot, create a login (if needed), and start a new conversation. Starting a new conversation is important because these chatbots may remember some of the context from a previous conversation, which will influence their response.

Step 2 – Making the request

Paste your instruction and attach a Word file or PDF of the text you’d like it to optimise.

The instruction (or ‘prompt’) you give is important in getting the right output. Here is an example prompt:

I'd like you to play the role of a highly experienced public health writer. You specialise in communicating medical information in a way that can be easily understood by members of the public.

I am going to share a piece of text with you. I would like you to use your skills in comprehension and concise copywriting for public health messaging to rewrite it with the following requirements:

  1. Keep the language suitable for an adult whose reading ability is that of an 8 year old and ensure it is culturally sensitive and inclusive
  2. Use as few words and sentences as necessary
  3. Use short words and sentences (e.g. avoiding words that are 4 syllables or more)
  4. Use active verbs
  5. Keep the text in very short chunks without adding many more subheadings
  6. Use a warm, inviting, reassuring and friendly tone without being too informal
  7. Avoid scientific and medical terms (like anonymous or randomisation)
  8. Write in short sentences and paragraphs avoiding too many bullet points

It’s really important that all the detail in the shared text is included in your version. I do not want you to remove information, but rather write it in a way that is more accessible and concise.

Here is a list of common words and some possible alternatives that are more accessible:

  • Randomisation: "Choosing by chance"
  • Participation: "Taking part"
  • Confidential: "Private"
  • Anonymous: "Without using your name" or "without your personal details"

This prompt gives the chatbot the desired ‘persona’ and skills needed for the task (in this example, a public health writer). It goes on to give clear instructions on what is required (e.g. the desired reading age, format, and tone). The more detailed you are in your prompt, the better the output. You may want to use this prompt as a template, amending any details or requirements as necessary.
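If you expect to reuse or adapt this prompt across many documents, it can help to keep the requirements and the word list in one place and assemble the prompt from them. The Python sketch below is one illustrative way to do this (the function and variable names are our own examples, not part of any chatbot's interface):

```python
# Illustrative sketch: assemble the accessibility prompt from a
# requirements list and a glossary of plain-language alternatives.
# Shortened versions of the requirements above are used as examples.

REQUIREMENTS = [
    "Keep the language suitable for an adult whose reading ability is "
    "that of an 8 year old and ensure it is culturally sensitive and inclusive",
    "Use as few words and sentences as necessary",
    "Use active verbs",
]

GLOSSARY = {
    "Randomisation": "Choosing by chance",
    "Confidential": "Private",
}

def build_prompt(requirements, glossary):
    """Return the full prompt text, ready to paste into a chatbot."""
    lines = [
        "I'd like you to play the role of a highly experienced "
        "public health writer.",
        "Rewrite the text I share with the following requirements:",
        "",
    ]
    # Numbered requirements, as in the example prompt above
    lines += [f"{i}. {req}" for i, req in enumerate(requirements, start=1)]
    lines += ["", "Common words and more accessible alternatives:"]
    lines += [f'- {word}: "{alt}"' for word, alt in glossary.items()]
    return "\n".join(lines)

print(build_prompt(REQUIREMENTS, GLOSSARY))
```

You would then paste the printed prompt, followed by your draft text, into the chatbot's conversation window as normal.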

The chatbot will generate a response to this prompt within your conversation.

Step 3 – Reviewing the output

You should evaluate the output against your planning table and your guiding principles. Does it fit with your design objectives and features of your intervention? Does it fit with the theory and evidence in your planning table? Does it include all the information you’d like it to include?

If you feel there are parts you’d like to amend, you can either amend these yourself, or you can respond with a request. For example, “The second paragraph was too long, can you make it shorter?” or “I like how you showed empathy, can you make the last paragraph more like that?”

During a long back-and-forth conversation the chatbot may start to forget the initial request, so it's sometimes better to start a new conversation with a prompt that includes all of the additional requests you made throughout the previous conversation.

We strongly recommend you ask patients, members of the public or other stakeholders to review the output to make sure it’s acceptable.
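Before asking people to review the output, you can also run a quick screen of your own against the reading-level requirements in the prompt. The sketch below is a rough heuristic that flags long sentences and long words; the thresholds are our own assumptions, it is not a validated reading-age measure, and it never replaces review by patients or the public:

```python
import re

def readability_flags(text, max_sentence_words=15, max_word_chars=12):
    """Rough screen: flag sentences and words that may be hard to read.

    A simple heuristic only, not a validated reading-age measure.
    """
    # Split into sentences on end punctuation, dropping empty pieces
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    long_sentences = [s for s in sentences
                      if len(s.split()) > max_sentence_words]
    # Flag unusually long words (a crude stand-in for syllable counts)
    words = re.findall(r"[A-Za-z']+", text)
    long_words = sorted({w for w in words if len(w) > max_word_chars})
    return {"long_sentences": long_sentences, "long_words": long_words}

report = readability_flags(
    "We will choose your group by chance. "
    "Randomisation means allocation is determined by a computer-generated "
    "sequence that neither you nor the researchers can influence in any way."
)
print(report["long_words"])  # flags "Randomisation"
```

Anything the screen flags is a candidate for a follow-up request to the chatbot, such as "can you shorten the second sentence?".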


Top tips

  • The more specific the prompt, the better the output. You can give specific reading ages/levels, tell it what tone to use (for example, "use friendly language" or "show empathy"), and ask for a particular structure (e.g. "use bullet points and subheadings").
  • Never input personal information or any patient/participant data into the system as the system may retain and reuse it.
  • You don’t have to give it a persona or expertise, but this can help.
  • If the output is too brief, you can try optimising your text section by section – ChatGPT in particular works better with smaller chunks of text.
  • If the output is too informal when asking for text for teenagers/children, you can add this to your prompt: "Even if the text suggests it should be written for children or adolescents, we would still like you to write to the level of an adult with a reading age of 8 years old."
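The section-by-section tip above can be made systematic by splitting your draft into chunks before pasting each one into its own prompt. A small Python sketch (the word limit of 300 is an arbitrary assumption; pick whatever suits your chatbot):

```python
def chunk_paragraphs(text, max_words=300):
    """Group paragraphs into chunks of at most max_words words each,
    so each chunk can be optimised in its own prompt.

    A paragraph longer than max_words still becomes its own chunk.
    """
    chunks, current, count = [], [], 0
    # Paragraphs are separated by blank lines
    for para in [p.strip() for p in text.split("\n\n") if p.strip()]:
        n = len(para.split())
        if current and count + n > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += n
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each returned chunk can then be pasted after the same prompt in a fresh conversation, and the optimised chunks reassembled by hand.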
Limitations of generative AI chatbots

  1. Missing information: Generative AI chatbots don’t have up-to-date information about novel topics.
  2. Potential for Incorrect or Inappropriate Information: When there is an unclear 'correct' answer, or the information is absent from the training data, there's a possibility that AI chatbots may generate inaccurate, inappropriate or outdated information. Any output should be carefully checked for accuracy.
  3. Sensitivity to Input Wording: AI chatbots are sensitive to the phrasing of input. Slight changes in how a question is asked may result in different responses. Very clear prompts/questions are crucial to getting the best output alongside a common need to correct or re-prompt the tool.
  4. Inability to Remember Context Over Long Conversations: While chatbots can maintain context within a conversation to some extent, they may struggle to remember information or references from the beginning of a long conversation. So if you give an instruction at the start of the conversation, the chatbot may have forgotten that instruction later in the conversation (unless you repeat it).
  5. Potential for Bias or Inappropriate Responses: Chatbots are trained on a vast range of data, but this data is inherently biased due to existing systematic bias. They may therefore give biased/inappropriate responses. Any AI output should be carefully checked for bias.

Bowers H, Ochieng C, Bennett SE, et al. Exploring the role of ChatGPT in rapid intervention text development [version 1; peer review: 1 approved with reservations]. F1000Research 2023, 12:1395.