Three Things to Know About Prompting LLMs
These research-backed tips can help you improve your prompting strategies for better results from large language models.
There seems to be no escaping large language models (LLMs) nowadays, and you certainly won’t find reprieve in this publication. But if you’re having trouble getting LLMs to give you the responses you’re looking for, you might want to consider changing how you prompt them. The way a prompt is structured has a significant impact on the quality of the response. Here are three research-based tips to help you improve your prompting strategies and get more out of LLMs.
1. Be polite. LLMs are just software, so it may seem surprising that the tone of your prompt has any bearing on their output. But researchers at Waseda University and the RIKEN research institute in Japan found that, across several languages, LLMs’ performance on a range of tasks generally improves as the politeness of the prompt increases (though only up to a point). Being rude to an LLM tends to yield poor results. The researchers suggest that this is likely because the corpus of data that LLMs are trained on shows humans responding better to one another when treated politely.
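For readers who interact with LLMs through code, here is a minimal sketch of how you might compare politeness levels for the same request. The prompt wording and the send_prompt placeholder are illustrative assumptions; wire in whichever model client you actually use.

```python
# Illustrative sketch: the same request phrased at different politeness levels,
# so you can compare the responses your own model returns for each.
POLITENESS_VARIANTS = [
    "Summarize this report.",                              # neutral
    "Could you please summarize this report? Thank you.",  # polite
    "Summarize this report now. Don't waste my time.",     # rude -- tends to do worse
]

def send_prompt(prompt: str) -> str:
    """Placeholder for a call to whatever LLM client you use."""
    raise NotImplementedError("Connect this to your model of choice.")

for prompt in POLITENESS_VARIANTS:
    print(f"PROMPT: {prompt}")
    # print(send_prompt(prompt))  # uncomment once send_prompt is implemented
```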
2. Give it context. Researchers at the University of Maryland found that you can reduce the extent to which LLMs hallucinate, or fabricate information, by providing context in your prompt. When an LLM was asked to list an author’s academic publications, it generally produced more accurate results when the author’s CV was included in the prompt than when it was not. It’s worth noting, however, that even with this context, the LLMs still hallucinated some of the time.
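In practice, supplying context often amounts to pasting the reference material into the prompt itself. The sketch below assumes the author’s CV is saved as a local text file; the file name, template wording, and build_grounded_prompt helper are all illustrative.

```python
# Illustrative sketch: grounding a query in supplied context (here, an author's CV)
# to reduce hallucinated publications.
from pathlib import Path

def build_grounded_prompt(question: str, context: str) -> str:
    """Prepend reference material so the model can draw on it rather than invent."""
    return (
        "Use only the reference material below to answer the question. "
        "If the answer is not in the material, say so.\n\n"
        f"--- REFERENCE MATERIAL ---\n{context}\n\n"
        f"--- QUESTION ---\n{question}"
    )

cv_text = Path("author_cv.txt").read_text()  # the author's CV, saved locally
prompt = build_grounded_prompt(
    "List the academic publications by this author.", cv_text
)
print(prompt)
```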
3. Assign a role. In his book Co-Intelligence, Wharton professor Ethan Mollick writes that prompting an LLM to first assume the role of a subject-matter expert yields better and more specific results than simply prompting it with the task alone. For example, before asking an LLM to generate taglines for a new product, try prefacing the request with “You are an expert in marketing.”
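Programmatically, assigning a role can be as simple as prepending a line to the task. The with_role helper and the sample task below are hypothetical; with a chat-style API, the same role line would typically go in the system message instead.

```python
# Illustrative sketch: prefixing a task with an expert role before sending it to a model.
def with_role(role: str, task: str) -> str:
    """Prepend a role statement so the model answers from that perspective."""
    return f"You are {role}.\n\n{task}"

prompt = with_role(
    "an expert in marketing",
    "Generate five taglines for a new reusable water bottle.",
)
print(prompt)
```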
As LLMs continue to evolve and researchers spend more time working with them, we will undoubtedly discover more and better ways to use them. The tips above should remind us, however, that LLMs are fickle tools that need to be handled with care.