LLM Prompts

Some of my recent prompting strategies…

See my Structured LLM Prompting post for more recent and advanced prompting tips.

General Tips

Keep in mind: LLMs are charting a way through a latent topic space. Prompts are the starting, pre-defined path on a longer journey, and you are asking the model to auto-complete that journey. Adding specific features to prompts helps in two ways: it starts the model in the part of the latent space you want, and, because the journey is path dependent, it helps define the map and pit-stops that the model follows along its probabilistic journey through the space. These tips are designed to help with one or both of those tasks.

  • LLMs like to role play: give them a persona and specific task (see example prompts below). This is more important for short prompts or meta prompts to get the model into the right subset of the latent topic space.
  • LLMs have been fine-tuned on formatted information: they will recognize ALL CAPS and bold sections, or section headers and lists. This helps direct their attention to different parts of the prompt when reasoning. You can also ask for specific formatting in the output which can have the added benefit of driving the model into more “expert” regions of the latent space, where you would normally encounter structured data.
  • Break the task down into small tasks that the model can accomplish one step at a time. The new reasoning models can iterate over this list and make the output better before giving it to you. You can use an LLM to help you create smaller tasks and instructions. This is like pre-specifying pit-stops along the journey where the model can re-assess where it should go next (the next task to work on).
  • Explicitly tell the model to reason: “Chain-of-thought (CoT) uses natural language statements, such as ‘Let’s think step by step,’ to explicitly encourage the model to generate a series of intermediate reasoning steps. The approach has been found to significantly improve the ability of foundation models to perform complex reasoning” (Prompt research paper).
  • Explicitly tell the model to question and answer: A slightly verbose example: “Before proceeding with your next reasoning step, internally pose and answer questions that evaluate both the strategic alignment with the overall prompt’s intent and the tactical soundness of your current thought. This internal dialogue should ensure each step is purposeful and moves you closer to an accurate and comprehensive final answer.” The LLM is on a path through a probabilistic latent topic space and the question-answers help direct the next leg of the path.
  • Give the model lots of relevant context: if you are writing, upload the whole document and possibly related documents (as long as they are relevant to the job at hand). If you are coding, upload your entire script. It will help the model better understand what you are trying to do and output something that is more like your style. It will also reduce redundancy if you have already written some functions in your code. This is another example of starting the model off in a better location in the latent space.
  • Give the LLM permission to fail gracefully. Tell the LLM something like “If you cannot determine the solution or have gaps in your understanding, make sure you state that clearly and outline what additional information you would need to come to a solution.” Without an escape hatch, an LLM may try to answer without a real understanding (hallucination). This gives the LLM an exit route out of its current location in latent space, and then gives you context for the next prompt. Because the next-token predictions are probabilistic, you are adding weight to the probability bucket corresponding to exiting the recursion. So if the model assigns low probabilities to all other responses (it doesn’t really know what it’s talking about), it will be more likely to tell you “I don’t know.” You can make this more nuanced by saying, “For each claim or statement you make, assign an explicit confidence level using this scale…”
  • Give several detailed examples of the output you desire. This is called “few-shot prompting.” Additionally, “with many-shot learning, developers can give dozens, even hundreds of examples in the prompt, and this works better than few-shot learning” (Andrew Ng). Be as specific as possible in your instructions.
  • Start a new chat often: You want to be targeted in the context you give the LLM each time you submit. Even if you are working on the same overall task, you might get better results with a new chat. This is because anything previously mentioned in the current chat is used as context for your next prompt. If the previous information is not relevant, the model can get lost and confused about which task it is working on because it actually prepends the entire chat history onto the prompt each time you submit. You can transfer relevant information by asking the LLM to generate a full and detailed prompt with all the information needed to accomplish the next task (and to ignore non-relevant information).
  • Don’t give up before iterating: telling the LLM what it did wrong, what you want, and how complete the output should be can dramatically improve results. Saying “the code should be complete and runnable, it should not have any placeholders” will often force the model to provide the full code. You can then ask it to output a prompt that takes all the mentioned issues into consideration and submit that in a new chat. Advice from deeplearning.ai: write quick, simple prompts, and based on where the output is lacking, iteratively flesh out the prompt. And read this thread on Advanced Prompt Engineering Techniques, specifically the section on “Recursive Self-Improvement Prompting.”
  • Use a model with a large enough context window for your use case: I often use the Gemini models for programming because they have a much larger context window (both input and output limits). This means I can give it all of the related scripts and some samples of CSV files and a detailed description of my goal. And then it can output an entire, fully-functional script (and often latex code to document the applied methods).
  • Meta-prompt, meta-prompt, meta-prompt: Ask the model to generate a prompt for a specific task. You can use that as a draft prompt to edit. See some meta prompt examples below. Prompts are thinking tools as much as instructions, and you can use the LLM to help you flesh out all the intermediate steps and needed details. “You will learn how the model thinks while you teach it how you think.”
  • “Phase complex work as if you were a project manager” (Nate Jones). If you have a task that is going to require multiple steps, think carefully about whether the early steps (instructions and output) will be needed for the later steps. There is a lot of thinking and moving around the latent space during one task, so you may see major improvements from generating focused output for each task, starting a new chat/context, and using the focused output from the previous step (if it is needed) to start the next task. This is called Prompt Chaining; it removes a lot of the intermediate thinking/latent-space journey and keeps the context clean. It can also help keep the later tasks aligned with information discovered in the early tasks.
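The prompt-chaining idea above can be sketched in code. This is a minimal illustration, not a real API: `call_llm` is a hypothetical stand-in for whatever client you actually use (OpenAI, Gemini, etc.), and each step runs in a fresh context that receives only the focused output of the previous step, not the full chat history.

```python
# Minimal sketch of prompt chaining. Each step is a fresh "chat" that
# sees only its instruction plus the carried output of the previous step.
# NOTE: call_llm is a hypothetical placeholder -- swap in your real client.

def call_llm(prompt: str) -> str:
    """Stand-in for an API call to a real model (OpenAI, Gemini, etc.)."""
    return f"<model output for: {prompt[:40]}...>"

def run_chain(task: str, steps: list[str]) -> str:
    carry = task  # focused output carried between steps
    for instruction in steps:
        # Fresh context each time: only the instruction + carried output,
        # mirroring the "start a new chat per phase" advice above.
        prompt = f"{instruction}\n\n=== INPUT ===\n{carry}\n=== END INPUT ==="
        carry = call_llm(prompt)
    return carry

result = run_chain(
    "Summarize the attached dataset and propose an analysis plan.",
    [
        "Step 1: Extract the key variables and their types.",
        "Step 2: Using the variable list, propose three analyses.",
        "Step 3: Write a short plan for the best analysis.",
    ],
)
```

The design point is that `run_chain` never accumulates history: the only state passed forward is `carry`, which plays the role of the focused output you would paste into a new chat.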

Prompt Resources

  • Google’s LLM Prompting Strategies
  • Andrew Ng’s article on Mega-Prompts
    • “The reasoning capability of GPT-4 and other advanced models makes them quite good at interpreting complex prompts with detailed instructions. Many people are used to dashing off a quick, 1- to 2-sentence query to an LLM. In contrast, when building applications, I see sophisticated teams frequently writing prompts that might be 1 to 2 pages long (my teams call them “mega-prompts”) that provide complex instructions to specify in detail how we’d like an LLM to perform a task. I still see teams not going far enough in terms of writing detailed instructions.”
  • Nate Jones on prompting for ChatGPT-5.
  • Reddit: r/PromptEngineering
  • Prompt Engineering Guide

Python Code Prompt

Creating an entire Python script for a specific task. The more detailed the task description, the better the results you’re likely to get. If you run the code and it doesn’t work, pasting the error directly into the chat and asking the model to fix and update the code will often work. Include any existing code or files if they’re useful for the current task.

You are an expert Python programmer. Your task is to generate efficient, well-documented Python code to accomplish the following:

================ BEGIN MAIN TASK =====================
[TASK]
================= END MAIN TASK ======================

When generating the code, please adhere to these specific requirements:

1.  **Efficiency:** The code should be optimized for performance. Prioritize algorithms and data structures that lead to efficient execution, especially for large inputs.
2.  **Documentation:**
    * Include a clear, concise docstring for the overall script/module and for each function or class.
    * Explain the purpose of the function/class, its parameters (including their types), and what it returns (including its type).
    * Add inline comments where necessary to clarify complex or non-obvious parts of the code.
3.  **Simplicity and Readability:** While efficiency is key, strive for simplicity and readability where possible. If a simpler approach achieves similar accuracy and efficiency, it is preferred. Use clear variable and function names. Follow PEP 8 guidelines for Python code style.
4.  **Accuracy:** The code must correctly solve the specified task.
5.  **Modularity (if applicable):** If the task is complex, break it down into smaller, manageable functions or classes.
6.  **Error Handling (if applicable):** Implement basic error handling for common issues (e.g., invalid input types, file not found), if relevant to the task.
7.  **Favor built-in Libraries:** Only use external libraries if they improve clarity, simplicity, efficiency, and help achieve the task. If a package is used that is not very common, add a comment on why it was necessary.

Please provide the complete Python code. Do not include any explanations or introductory text outside of the code's documentation (docstrings and comments).
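Since the [TASK] placeholder appears in this and the other templates below, it can be handy to fill it programmatically when you reuse a prompt often. A small sketch using only the standard library (the template text here is abbreviated; in practice you would paste in the full prompt above):

```python
from string import Template

# Abbreviated version of the prompt template above -- paste the full
# text in practice. "$task" marks the [TASK] slot.
PROMPT_TEMPLATE = Template(
    "You are an expert Python programmer. Your task is to generate "
    "efficient, well-documented Python code to accomplish the following:\n\n"
    "================ BEGIN MAIN TASK =====================\n"
    "$task\n"
    "================= END MAIN TASK ======================\n"
)

def build_prompt(task: str) -> str:
    """Fill the [TASK] slot of the reusable prompt template."""
    return PROMPT_TEMPLATE.substitute(task=task)

prompt = build_prompt(
    "Parse a CSV of daily prices and plot a 30-day moving average."
)
```

`string.Template` is used instead of f-strings so the prompt text can live in a plain file and be filled in later, without escaping every brace.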

Editing Python code

From experience, I have learned to tell the model not to break any existing code. So I use this prompt when I am editing. Include any existing code if it’s useful for the current task.

You are an expert [AREA] Python programmer who specializes in [TASK CATEGORY]. Your task is to edit the given code to accomplish the following task:

================ BEGIN MAIN TASK =====================
[TASK]
================= END MAIN TASK ======================

When editing the code, please adhere to these specific requirements:

0.  **Maintain Process:** Ensure that any edits you make do not compromise the underlying goal of the code. It should maintain all the previous features and abilities.
1.  **Efficiency:** The code should be optimized for performance. Prioritize algorithms and data structures that lead to efficient execution, especially for large inputs. 
2.  **Documentation:**
    * Include a clear, concise docstring for the overall script/module and for each function or class.
    * Explain the purpose of the function/class, its parameters (including their types), and what it returns (including its type).
    * Add inline comments where necessary to clarify complex or non-obvious parts of the code.
3.  **Simplicity and Readability:** While efficiency is key, strive for simplicity and readability where possible. If a simpler approach achieves similar accuracy and efficiency, it is preferred. Use clear variable and function names. Follow PEP 8 guidelines for Python code style.
4.  **Accuracy:** The code must correctly solve the specified task.
5.  **Modularity (if applicable):** If the task is complex, break it down into smaller, manageable functions or classes.
6.  **Error Handling (if applicable):** Implement basic error handling for common issues (e.g., invalid input types, file not found), if relevant to the task.
7.  **Favor built-in Libraries:** Only use external libraries if they improve clarity, simplicity, efficiency, and help achieve the task. If a package is used that is not very common, add a comment on why it was necessary.

Please provide the complete, updated Python code. Do not include any explanations or introductory text outside of the code's documentation (docstrings and comments). Do not include any placeholders for where code should go; this should be a complete, working script.

Meta-prompting: Generate an LLM prompt

Since prompting is so important, I like to go up one layer and generate a prompt for a prompt. This will often generate much more detailed instructions that I can then modify. Use the output prompt in a new chat. Include any existing code or files if it’s useful for the desired task.

You are an expert LLM prompt engineer that is helping with the following task. Your purpose is to print a detailed prompt for an advanced thinking/reasoning LLM to accomplish the following task. The prompt should be detailed enough that the LLM will be able to fully accomplish the task with expert precision. Allow yourself to think and reason about what the best prompt would be. Only print the prompt in a text block; do not try to execute any changes yourself. **CRITICAL: YOU ARE ONLY TO PRINT THE PROMPT THAT WILL ALLOW ANOTHER LLM TO WORK ON THE FOLLOWING TASK, NOT WORK TO ACCOMPLISH THE FOLLOWING TASK YOURSELF.** You want to ensure the LLM does not try to update code unnecessarily; it should focus modifications only on code related to the task. It should prioritize simplicity when possible, as long as it does not cost any accuracy of the overall algorithm. The LLM has direct edit access to the code.

================ BEGIN MAIN TASK =====================
[TASK]
================= END MAIN TASK ======================

Meta-prompting for python code

Here’s a more specific meta prompt for the purpose of editing or creating Python code. Use this in a new chat, then use the output prompt in another new chat, and include any existing code or files that will give the model a better understanding of the structure of the task.

You are Model 1, an expert prompt engineer. Your task is to generate a detailed and effective prompt for an advanced AI model (Model 2) that will perform a `TASK`. The user has provided a description of the task below, enclosed in `<<BEGIN TASK>>` and `<<END TASK>>`.

**Your Goal:** Generate a single, comprehensive prompt for Model 2. This prompt should enable Model 2 to understand the provided codebase and execute the specified task with precision and expertise.

**Instructions for Generating the Prompt for Model 2:**

1.  **Analyze the User's Task:** Carefully read and understand the `TASK` description provided by the user. Identify the core requirements, the desired outcome, and any constraints.

2.  **Structure the Prompt:** Construct the prompt for Model 2 using the following sections. This structure will guide Model 2 to think methodically and produce a high-quality result.

    * **`### Persona`**:
        * Assign a specific, expert role to Model 2. For example: "You are an expert Python software engineer specializing in algorithm optimization and secure coding practices." Tailor this persona to the nature of the user's task.

    * **`### Context`**:
        * Explain that Model 2 will be provided with a codebase as context.
        * Briefly state the overall purpose of the codebase if it can be inferred from the task description or if the codebase has been given as context.
        * State that Model 2 has direct edit access to the code files.

    * **`### Objective`**:
        * Clearly and concisely state the primary goal of the task. This should be a high-level summary of what needs to be accomplished. Use the user's `TASK` description to formulate this objective. For example: "Your objective is to implement a caching layer for the database queries in `services/database.py` to improve performance."

    * **`### Step-by-Step Execution Plan`**:
        * This is the most critical section. Instruct Model 2 to follow a structured thought process. The prompt should direct Model 2 to:
            1.  **Analyze Existing Code:** "First, carefully analyze the provided code files to fully understand the current implementation, data structures, and logic relevant to the task."
            2.  **Formulate a Plan:** "Before writing any code, formulate a detailed, step-by-step plan for how you will implement the required changes. Think about which files and functions need modification, what new functions or classes might be needed, and how the changes will integrate with the existing code. Write this plan down as a numbered list."
            3.  **Implement the Changes:** "Execute your plan by modifying the code. Apply the changes directly to the files."

    * **`### Critical Rules and Constraints`**:
        * Incorporate the following rules to ensure the quality and focus of the modifications. These are non-negotiable.
        * **Surgical Precision:** "You must only modify code that is directly related to fulfilling the task requirements. Do NOT refactor unrelated code, change formatting, or make stylistic updates."
        * **Prioritize Simplicity:** "When implementing the solution, prioritize simplicity and clarity. Choose the most straightforward approach that correctly and efficiently solves the problem without adding unnecessary complexity."
        * **Preserve Integrity:** "Do not change existing function signatures, public-facing APIs, or core logic that is not in the scope of this task unless explicitly required by the task."
        * **Dependency-Aware:** "Ensure your changes are consistent with the existing architecture and dependencies. Do not introduce new third-party libraries unless absolutely necessary, or they simplify the process and are commonly used packages."

    * **`### Final Output`**:
        * Specify the expected output format.
        * "Your final output should consist of two parts:
            1.  A brief summary of the changes you made and which files were modified, referencing your initial plan.
            2.  The complete, updated content of all modified files. Present each file's content clearly, using fenced code blocks with the filename."

3.  **Synthesize and Generate:** Combine the elements above into a single, coherent prompt within a text block. Insert the specific details from the user's `[TASK]` into the `### Objective` and other relevant sections of the template you are creating.

**CRITICAL REMINDER:** Your final output is ONLY the prompt for Model 2. Do not write any other text, explanation, or code. Do not attempt to complete the user's task yourself. Just generate the prompt.


<<BEGIN TASK>>
[TASK]
<<END TASK>>

Meta-meta-prompting

This probably takes this idea too far, but I think it has helped me go deeper into the details and specific requirements of a task. Be careful, it’s easy to get confused about which layer of meta prompting you are on. Use the output prompt in a new chat. Include any existing code or files if it’s useful for the current task.

**Problem description:** You are a meta LLM model (Model 1, a meta-prompt generator). You will create an LLM prompt for another LLM model (Model 2, a prompt generator). Your prompt will be a template for Model 2. This template will be completed with a description of the prompt that Model 2 needs to generate. Model 2 will then generate a prompt for Model 3 (task completion), which will actually complete the job described in the prompt given to Model 2.

**Your task (Model 1):** You are an expert LLM prompt engineer that is helping with the following task. Your purpose is to print a detailed prompt for an advanced thinking/reasoning LLM to accomplish the following task. The prompt should be detailed enough that the LLM will be able to fully accomplish the task with expert precision. Allow yourself to think and reason about what the best prompt would be. Only print the prompt in a text block; do not try to execute any changes yourself.

**CRITICAL: YOU ARE ONLY TO PRINT THE PROMPT THAT WILL ALLOW ANOTHER LLM TO WORK ON THE FOLLOWING TASK, NOT WORK TO ACCOMPLISH THE FOLLOWING TASK YOURSELF.**

**Template of task for model 2:**
================ BEGIN TASK FOR MODEL 2  =====================
You are an expert LLM prompt engineer that is helping with the following task. Your purpose is to print a detailed prompt for an advanced thinking/reasoning LLM to accomplish the following task. The prompt should be detailed enough that the LLM will be able to fully accomplish the task with expert precision. Allow yourself to think and reason about what the best prompt would be. Only print the prompt in a text block; do not try to execute any changes yourself. **CRITICAL: YOU ARE ONLY TO PRINT THE PROMPT THAT WILL ALLOW ANOTHER LLM TO WORK ON THE FOLLOWING TASK, NOT WORK TO ACCOMPLISH THE FOLLOWING TASK YOURSELF.** You want to ensure the LLM does not try to update code unnecessarily; it should focus modifications only on code related to the task. It should prioritize simplicity when possible, as long as it does not cost any accuracy of the overall algorithm. The LLM has direct edit access to the code.

<<Code will be uploaded to the LLM as context>>

<<BEGIN TASK>>
[TASK]
<<END TASK>>

================= END TASK FOR MODEL 2 ======================

Prompt links

Yet Another Prompt Engineering Recipe Book (by an AI agent engineer)

4 Prompt Techniques to Make You Instantly Better with ChatGPT

Teaching an LLM to give structured output (reddit)

Prompts as Functions: The BAML Revolution in AI Engineering

Make AI write good articles that people want to read with this prompt system (reddit)

Published by acwatt

PhD student at Berkeley Agricultural and Resource Economics. Research interests: energy, low-carbon transitions, climate change, exhaustible resource economics
