Prompt engineering is shaping
Prompt engineering is an emerging practice of creating the most effective prompts to get valuable responses from AI tools like ChatGPT or GitHub Copilot.
Most people ask ChatGPT a simple, single question and may be underwhelmed by the quality of the response. But there is wide variability in the depth and quality of responses depending on the structure and quality of the prompt.
While this practice will continuously shift and evolve alongside each new AI model or version, the early techniques are noticeably effective and even a little surprising.
Better prompts provide a lot of detail to set context, considerations, constraints, expected format, and more.
A successful prompt might include the following:
- The AI's role: who they are and what perspective they should adopt
- The desired outcome, goal, or solution
- Constraints and relevant details
- A request for the AI to ask questions before responding, and to confirm the prompt makes sense
- The expected output format, such as "step-by-step instructions"
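As a loose illustration, the elements above can be assembled into a reusable template. This is a minimal sketch, assuming a simple string-based approach; the function name and fields are my own invention, not a standard API.

```python
# Sketch: templating the prompt elements listed above into one prompt string.
# The field names (role, goal, constraints, output_format) are illustrative
# assumptions, not part of any official tool or library.

def build_prompt(role, goal, constraints, output_format):
    """Assemble a structured prompt from the listed elements."""
    lines = [
        f"You are {role}.",            # the AI's role and perspective
        f"Your task is to {goal}.",    # the desired outcome
    ]
    # Constraints and details narrow the solution space.
    for constraint in constraints:
        lines.append(f"Constraint: {constraint}")
    # Define the expected output, then invite clarifying questions.
    lines.append(f"Provide your answer as {output_format}.")
    lines.append("Please ask any clarifying questions before responding.")
    return "\n".join(lines)

prompt = build_prompt(
    role="an experienced game developer with expertise in Godot 4",
    goal="create the core vehicle game object",
    constraints=["the player cannot tune the vehicle"],
    output_format="step-by-step instructions",
)
print(prompt)
```

In practice you would write the prompt by hand, but the template makes the structure explicit: every element from the checklist maps to a line in the final prompt.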
An example of a higher-quality prompt is:
You are an experienced game developer with expertise in the Godot 4 game engine. You are creating a new racing game with realistic vehicle physics and handling, including independent suspension, manual shifting, tire condition, and surfaces with different friction materials. While there will be different car models with different engine parameters, the player will not be able to tune the vehicle. Your task is to create the core vehicle game object. The vehicle is a central focus of the game, so the implementation should be robust. First, provide step-by-step instructions of how you will approach this task. Once the instructions are finalized, you will next provide the corresponding code in GDScript. Please ask any questions you may have before generating a response. Does that sound alright?
Note the frame-setting and depth of the prompt, and how it sets up the AI as a partner in the process. I've found that asking the AI to pose clarifying questions is particularly effective, as it will do a fair bit of discovery to further understand the job to be done and constrain the response.
Essentially, this is a form of shaping: the practice of defining and de-risking potential solutions. A shaped pitch has five ingredients:
- Problem — The raw idea, a use case, or something we’ve seen that motivates us to work on this
- Appetite — How much time we want to spend and how that constrains the solution
- Solution — The core elements we came up with, presented in a form that’s easy for people to immediately understand
- Rabbit holes — Details about the solution worth calling out to avoid problems
- No-gos — Anything specifically excluded from the concept: functionality or use cases we intentionally aren’t covering to fit the appetite or make the problem tractable
Effective prompt engineering (and answering questions from the AI) should cover all the areas of a defined pitch. It's important to note that the solution in a pitch is high-level but well defined, and the AI is tasked with carrying out the specific implementation.
Sometimes, it's unclear what potential solutions may even exist, especially when learning a new skill or topic. In this case, it's valuable to engage the AI as a partner in the shaping process itself, by crafting a prompt that establishes the problem space, the desired outcome, what is in scope, what is out of scope, and so forth.
As I started learning about prompt engineering, I was surprised that the biggest improvements seem to come from humanizing the AI, such as telling them who they are, requesting that they ask for clarification, or asking if they understand.
After decades of figuring out how to write cold, unnatural queries for search engines, it feels new but more intuitive to write prompts that are closer to having a normal conversation. And there will be more alignment between the shaping work I do with my clients and the prompt engineering I do with AI, which will push my shaping skills further.
The main quality of effective prompt engineering is engaging AI as a pairing partner or mentor in the process. Treating AI as a collaborator, not as something we merely delegate to, produces more value.
If an artificial intelligence benefits from this treatment, then surely we can do better to engage individual contributors as equals with mutual respect and involvement in shaping our work.