Since generative AI hit the mainstream at the end of last year, most newsrooms have been trying to figure out how to use ChatGPT and its peers to create content (cheaply), improve journalistic work, or simply streamline or eliminate time-consuming tasks.
Some of these trials went quite badly, especially when ChatGPT was used unsupervised, or as a knowledge tool rather than purely as a language tool. As MIT Technology Review puts it, large language models (LLMs) are notorious bullshitters: their main job is to sound plausible, not to be factually correct. After all, everything the machine spits out is based on the probability of which next word sounds best, not on any deep understanding of the input it was given.
Keeping this in mind, along with the firm recommendation that a human check is always required, using these systems as language tools can be extremely valuable in many areas of journalism and newsroom work.
If you have experimented with ChatGPT or similar tools and gained some first experience, here are seven concrete applications where you can begin working earnestly with them. Start honing your prompting skills for your daily work in the newsroom and create some “masterprompts”, whether you have access to GPT-4 or another language model such as Bard or Claude.
- Copy editing and grammar checking: The most obvious applications are simple copy editing and grammar checking of articles. The results are absolutely usable and of high quality, even with quite simple prompts. If you add instructions, such as the desired writing style, you can refine and improve the response.
- Creating and optimising headlines: ChatGPT is already widely used for creating and optimising headlines and teasers. Asking for multiple examples brings usable results, sometimes superior to human-written versions. Prompt additions such as creating cliffhangers without using questions, or describing who the article is intended for, improve the quality of the response. Giving examples of what a good headline looks like also makes a big difference, but more on that later.
- Summarising articles: Creating summaries of articles is another popular application, and it often works very well. Even though ChatGPT and LLMs in general can count neither characters nor words, terms in the prompt like “short” or “very short” work quite well.
- Newsletter creation: Using ChatGPT to create newsletters containing teasers or summaries of articles is another simple but useful application. Either the source articles are part of the prompt (which can be cumbersome if there are a lot of long articles), or the summaries or teasers are created beforehand and then used in the prompt. Even instructing ChatGPT to write text transitions between topics works quite satisfactorily in some cases. As with all the other examples, trial and error improves the results.
- Combining articles: Another time-saver is combining different articles on the same or related topics into a new article. The result can offer the reader a concise overview and summary of a series of articles. Here the border between using an LLM purely as a language model and creating new journalistic content gets blurry, so every newsroom needs to decide where the red line is.
- Target-group-specific rewriting: This includes taking a current-affairs story and rewriting it in less complex language, or in a writing style suitable for children or for immigrants who are not fluent in the country’s language.
- Social media posts: This takes summarising and target-group-specific writing to another application: writing posts for Facebook, X, and other platforms. Since an LLM cannot count the exact number of characters, as mentioned before, some experimenting is needed. If you give a character count and deliberately undershoot the platform’s limit, you will often get good results.
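Several of the applications above (summaries, teasers, newsletters) can be scripted rather than typed into a chat window. The sketch below only builds the message list for a chat-style API shaped like OpenAI's Chat Completions (role/content dictionaries); the helper name, wording, and the model mentioned in the comment are illustrative assumptions, not a fixed recipe.

```python
def build_summary_messages(article_text, audience="general readers", length="very short"):
    """Build a chat-style message list for summarising an article.

    The system prompt pins the model to a role and restricts it to the
    supplied text, which reduces (but does not eliminate) hallucinations.
    LLMs cannot reliably count characters or words, so length is expressed
    in loose terms like "short" or "very short".
    """
    system_prompt = (
        "You are a copy editor in a newsroom. "
        "Use only the article text provided by the user as your source. "
        f"Write for {audience}."
    )
    user_prompt = (
        f"Write a {length} summary of the following article:\n\n{article_text}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_summary_messages("The city council voted on Tuesday ...")
# "messages" can now be passed to any chat-completion client, e.g. (hypothetically)
# client.chat.completions.create(model="gpt-4", messages=messages)
```

Keeping the prompt text in one place like this also makes it easy to version and refine a newsroom's “masterprompts” over time.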
In any of these applications, the quality of the prompt determines the quality of the response. Sophisticated “prompt engineering” or “prompt design” is not trivial, and there are job ads offering $150k or more for skilled prompt engineers.
Sometimes the best prompts are longer than the response, and a lot of thought can go into designing them. Not every journalist will, or has to, become a bona fide prompt-engineering specialist. But here are five starter tips to improve the quality of responses quickly.
- Be clear and specific: The system does what you tell it to do, not what you want it to do. Ambiguity can lead to unexpected or undesired responses.
- System prompt: Perhaps the most important part of a prompt is the system prompt. It describes the role the model should assume for the task ahead and can influence the result dramatically. For instance, starting a prompt with “you are a journalist”, followed by asking it to “explain what the sun is”, will produce a very different result than “you are a philosopher” with the same task.
- Reduce hallucinations: State in the prompt that the model should only use the data provided in the prompt as a source for the task. It doesn’t eliminate the risk of hallucinations completely but reduces it.
- Provide examples: Giving ChatGPT examples of how you want the result to look is also very powerful. For instance, if you ask for headline suggestions for an article, provide examples of other headlines written in the style you want.
- Refine prompts: ChatGPT is a conversational tool, so refine your prompts as you go. It keeps the conversation in context and builds on past prompts and responses. So keep asking, and experiment with different phrasings and structures.
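The tips above (a system prompt that sets the role, a source restriction, and a few examples of the desired output) can be combined in a single prompt. A minimal sketch, again assuming a Chat Completions-style message format; the function name and the instruction wording are made up for illustration:

```python
def build_headline_messages(article_text, example_headlines):
    """Combine a role-setting system prompt, a source restriction, and
    few-shot style examples into one headline-generation prompt."""
    # Few-shot examples: show the model what a good headline looks like.
    examples = "\n".join(f"- {h}" for h in example_headlines)
    system_prompt = (
        "You are a headline writer at a daily newspaper. "
        "Base your suggestions only on the article text provided."
    )
    user_prompt = (
        "Suggest five headlines for the article below. "
        "Create a cliffhanger without using a question.\n\n"
        f"Headlines in the style we want:\n{examples}\n\n"
        f"Article:\n{article_text}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]
```

Swapping in a different set of example headlines is often all it takes to adapt the same prompt to another desk or publication.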
These applications and tips are only the first steps into the rabbit hole of using generative AI in a journalistic context. As the systems continuously evolve, so does the science and art of prompt design. The key factors for every journalist are constant experimenting and learning by doing. Most importantly, ChatGPT and Co. can help the work of journalists by automating and streamlining tasks, but it cannot replace human intuition and creativity.