Prompt engineering is the practice of crafting, priming, refining, and probing prompts (or series of prompts) to guide Large Language Models (LLMs), typically within the bounded scope of a single conversation. A prompt can be as simple as a few words or as complex as an entire paragraph, and it serves as the starting point for the model's response.
Prompt engineering skills help you better understand the capabilities and limitations of LLMs. Researchers use prompt engineering to improve the performance of LLMs on a wide range of common and complex tasks, such as question answering and arithmetic reasoning. Developers use prompt engineering to design robust and effective prompting techniques that interface with LLMs and other tools.
Prompt engineering is not just about designing and developing prompts. It encompasses a wide range of skills and techniques for interacting and building with LLMs, and it is essential for interfacing with these models and understanding what they can do. You can use prompt engineering to improve the safety of LLMs and to build new capabilities, such as augmenting LLMs with domain knowledge and external tools.
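As a minimal sketch of the "augmenting with domain knowledge" idea, the snippet below assembles a prompt that grounds a model in supplied context before asking a question. The function name, template, and example text are illustrative assumptions, not part of any particular library or this guide's API:

```python
def build_prompt(instruction: str, context: str, question: str) -> str:
    """Assemble a prompt that grounds the model in supplied domain knowledge.

    The template is illustrative: an instruction, a context block the model
    should rely on, and the user's question, ending with an answer cue.
    """
    return (
        f"{instruction}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Hypothetical usage: the assembled string would be sent to an LLM of your choice.
prompt = build_prompt(
    instruction=(
        "Answer using only the context below. "
        "If the answer is not in the context, say you don't know."
    ),
    context="The company's refund window is 30 days from the date of purchase.",
    question="How long do customers have to request a refund?",
)
print(prompt)
```

Because the domain facts live in the prompt rather than in the model's weights, the same model can answer questions about material it was never trained on.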