You’re using ChatGPT wrong. Here’s how to prompt like a pro
Smarter prompts lead to smarter responses.

Most people use ChatGPT for quick answers. But reframing the way I understand Large Language Models (LLMs) like ChatGPT or Gemini instantly improved the responses I got. With the right prompts, they became sharper, more accurate, and more tailored to my needs.
Disclaimer: I’m no professional AI engineer. What follows is a blend of research and my own personal insight and experience, and I’ll flag any assumptions I make as I go. If you’re a language model expert, feel free to weigh in. I’ll happily be told that I’m wrong.
Still, this simple change in mindset helped me get way more out of ChatGPT by changing the mental model I hold for it. Try these ideas out and let the results speak for themselves.
More than just a beefed-up Google search
How I’ve started thinking about LLMs:
At its core, a Large Language Model (LLM) is just a language-parsing, pattern-matching machine.
The fact that it sometimes tells us useful information is merely a coincidence. To teach it to speak, we supply it with a vast amount of human writing, because what better way to learn to speak than by seeing billions of examples of real people doing exactly that? As it happens, the text we fed it also contained some useful information.
LLMs don’t “know” anything.
They are just very good at pattern recognition and reproduction.
It just so happens that the phrase “The Great Fire of London” is often followed by the number 1666.
What do we mean when we talk about pattern recognition?
In the context of languages, pattern recognition can take on many forms:
- Understanding how vocabulary and sentence structure patterns make up different writing styles, voices, and personas
- Understanding how language can be used to convey sentiment and identifying semantically and thematically similar language
- Understanding how language is mapped between different domains
These are the real strengths of modern AI chatbot tools, so how can we use these ideas to help us write better prompts?
Let’s start with a couple of tips you may already be familiar with to make sure we’re all on the same page.
Roleplay.
LLMs are general-purpose by design. So anything we can do to help narrow down their response options will only serve to provide us with better responses. After all, context is everything.
Asking your AI chatbot to adopt a role or persona helps it to understand the goal of the interaction, narrowing down the scope of what’s relevant. Without roleplay, it often tries to cover too much information or to take the response in different, potentially irrelevant directions.
“You’re a financial advisor speaking to a beginner investor. Explain what a stock option is and when someone might use one.”
Roleplay also helps to tune the communication style of the response. You’d expect a slightly different voice, tone, and language from a university professor than you would from your friend explaining a topic to you. This increases the value of the response not only in terms of content, but also clarity and understandability.
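If you ever drive a model through the OpenAI API rather than the chat window, the persona is exactly what the system message is for. Here's a minimal sketch, assuming the official OpenAI Python SDK and a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The system message carries the persona; the user message carries the actual question.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works
    messages=[
        {"role": "system",
         "content": "You are a financial advisor speaking to a beginner investor."},
        {"role": "user",
         "content": "Explain what a stock option is and when someone might use one."},
    ],
)
print(response.choices[0].message.content)
```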
Roleplay: https://arxiv.org/html/2308.07702v2
Decomposition.
LLM responses all tend to be about the same length. Sure, we can ask for more concise ones, but if you ask for a super long, multi-chapter, dissertation-length epic, you’ll most likely be disappointed. Practically, this means we should break complex tasks down into multi-stage prompts so that we get the full level of detail for each step, rather than rationing our response length across multiple steps.
Prompt decomposition: https://arxiv.org/abs/2210.02406
We can combine this with the previous point in a technique known as…
Role-Based Prompt Decomposition.
Say we have a complex problem we want to ask our chatbot to tackle. We might want to research a topic, find some information, identify the key ideas, and present them in an engaging way. We can break this task down into 3 or 4 steps and assign each one a different role.
- “Act as a researcher. Find out what topics are typically covered in beginner personal finance courses.”
- “Now, act as a teacher. Create a 4-week outline for a course using those topics.”
- “Now act as a content writer. Draft the first week’s lesson content.”
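If you're scripting this rather than typing it into a chat window, the same idea is just a loop that re-assigns the role at each step while keeping the earlier answers in the message history. A rough sketch, again assuming the OpenAI Python SDK (the role and task strings mirror the list above; everything else is my own placeholder):

```python
from openai import OpenAI

client = OpenAI()

steps = [
    ("researcher", "Find out what topics are typically covered in beginner personal finance courses."),
    ("teacher", "Create a 4-week outline for a course using those topics."),
    ("content writer", "Draft the first week's lesson content."),
]

messages = []
for role, task in steps:
    # Each step assigns a fresh persona but keeps the previous answers as context.
    messages.append({"role": "user", "content": f"Act as a {role}. {task}"})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"--- {role} ---\n{answer}\n")
```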
Chain-of-Thought Prompts.
It may be common knowledge by now, but asking an LLM to think aloud improves its ability to reason and work through problems logically. This is known as chain-of-thought prompting. Let’s unpack this a little before we take it one step further:
If you just ask your chatbot for the answer to a complex problem, the chances of it getting it exactly right are slim. This is especially true if it’s a niche topic, a question that may not have been asked before, or something that requires some critical thinking; there’s every chance that it may just hallucinate an answer instead. To get around this, there are some ways we can change our prompts to encourage explicit logical explanations:
- Ask it to take on the role of an ‘analyst’, ‘detective’, or some other role that typically requires critical thinking
- Ask it to think before it answers
- Ask it to explain its answers and justify the steps it’s taken to get there
I like to think about it like this:
An LLM is a probability model. The next word it chooses is whatever it deems most likely given everything that’s come before in the conversation (the context window – more on that later).
So what happens if we ask it to start reasoning logically? The most likely sentence to follow will be a continuation of the logical argument. If we string enough logical thoughts together, we’re more likely to get to a correct answer than if we’d just jumped straight there in the first place.
Even if the answer is wrong, by showing its reasoning, you may be able to identify the mistake and get the right answer yourself. Sometimes what we really need isn’t answers; it’s ideas.
It’s worth pointing out that newer models are increasingly able to identify when this sort of logical reasoning is required, and will start thinking out loud without being explicitly asked.
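If you want to force the issue, the nudge can be as blunt as appending a "think step by step" instruction to the question. A minimal sketch, assuming the OpenAI Python SDK and a throwaway example question:

```python
from openai import OpenAI

client = OpenAI()

question = "A train leaves at 14:40 and arrives at 17:05. How long is the journey?"

# Asking for the reasoning first pushes the model into a chain of thought
# before it commits to a final answer.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": question + "\n\nThink step by step and explain your reasoning before giving the final answer.",
    }],
)
print(response.choices[0].message.content)
```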
Tree-of-thought prompts
We can take this thinking-out-loud idea one step further with the idea of a “tree of thought”. Instead of following a single chain of reasoning, what if we ask it to consider several possible lines of thought and then evaluate which one is most likely to be correct? There are a few ways we can go about this:
“Let’s consider multiple answers and go with the most common one”
“Give me a few different answers and tell me how confident you are with each one.”
The advantage of this strategy is that it simulates the ability of the model to ‘look ahead’ and consider multiple ideas before choosing one to pursue. Without this approach, it may overconfidently choose one approach and commit to it, regardless of where it ultimately leads. This technique vastly improves the model’s ability to navigate problem-solving scenarios and complex decision-making.
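A crude way to approximate the "consider several answers and go with the most common one" idea over the API is to sample the same prompt several times and take a majority vote. Strictly speaking this is closer to what the literature calls self-consistency than a full tree search, and every detail below (model name, sample count, answer format) is an assumption:

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()

prompt = (
    "What day of the week was 1 January 2000? "
    "Reason step by step, then give the final answer on its own line, prefixed with 'ANSWER:'."
)

# Sample several independent completions at a non-zero temperature...
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    n=5,
    temperature=0.8,
)

# ...then keep whichever final answer comes up most often.
answers = []
for choice in response.choices:
    for line in choice.message.content.splitlines():
        if line.startswith("ANSWER:"):
            answers.append(line.removeprefix("ANSWER:").strip())
print(Counter(answers).most_common(1))
```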
Tree of thought vs. chain of thought: https://www.ibm.com/think/topics/tree-of-thoughts
ReAct Prompting (Reasoning and Acting).
There are other prompting techniques we can use that make use of this same “think aloud” principle. ReAct prompts are one such example; they combine reasoning with action, prompting a model to describe how it will perform a task before it executes it. This has the effect of increasing accuracy by narrowing the scope of the task, especially for information retrieval or analysis instructions.
“Here’s an essay I’ve written. What would make it better? Can you make those improvements?”
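One way to script the same pattern is to split it into an explicit plan step and an act step, feeding the plan back in before asking for the execution. This is a loose, prompt-only sketch of the idea rather than the full ReAct loop with tools; the file name, model, and wording are all my own placeholders:

```python
from openai import OpenAI

client = OpenAI()
essay = open("essay.txt").read()  # your draft (placeholder file name)

# Step 1: reason. Ask for a concrete plan of improvements, not the rewrite itself.
messages = [{
    "role": "user",
    "content": f"Here's an essay I've written:\n\n{essay}\n\n"
               "List the specific changes that would make it better. Don't rewrite it yet.",
}]
plan = client.chat.completions.create(model="gpt-4o-mini", messages=messages).choices[0].message.content

# Step 2: act. Ask it to carry out exactly the plan it just described.
messages += [
    {"role": "assistant", "content": plan},
    {"role": "user", "content": "Now apply exactly those improvements and give me the revised essay."},
]
revised = client.chat.completions.create(model="gpt-4o-mini", messages=messages).choices[0].message.content
print(revised)
```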
ReAct Prompting: https://www.promptingguide.ai/techniques/react
Build a shared understanding before you commit.
While it might sound like half-decent relationship advice, it applies in the context of LLMs, too.
It can often be beneficial to have a model demonstrate an ‘understanding’ of the context and constraints of the situation before we ask it to perform a task.
I often find it useful to begin my conversations with an establishing prompt to set up my goals and supply any context or constraints. My intention here is to prime the model for whatever task I have in mind. I’ll usually tag on a follow-up question to check that it can expand on my idea in a way that aligns with my vision.
Even just getting the model to describe it back to you can be enough to confirm it has grasped the key ideas and constraints.
This technique is particularly useful with image generation, since models often have a limit on the number of images you can produce each day (without paying extra). I’ll start by summarising what I want in the image, what it should be used for, and the overall style that I’m going for. I’ll then follow up with a question like:
“What do you think of this idea?” or “Do you have any ideas to improve on this concept?”
This is usually enough to check that it “got the gist” of what I’m asking for. If its response aligns with what I had in mind, I’ll carry on and tell it to run the task. If not, we can refine and adjust until I’m confident that we have a ‘shared understanding’.
To take this one step further, sometimes it’s easier to skip the establishing prompt altogether and ask the model to essentially prompt itself. Why tell it what style to choose when it can tell you itself? For example:
“I want some slides for my presentation on topic X. What do you think would make for a great presentation? What information can I give you that will help you make this even better?”
“… Great, now here’s some information, can you write some slides for me?”
Designed to be agreeable.
Have you ever noticed how ChatGPT will rarely tell you that you’re wrong? That’s on purpose! In an effort to make them more ‘helpful’, models are designed to be highly agreeable. I’m sure this is the lesser of two evils; AI tools that always told us we were wrong would be really unhelpful, but the agreeableness has its own downsides, like hallucinations and logical errors.
For example, when researching or trying to understand a topic, it’s best to suggest an alternative whenever you ask a question:
“Am I correct in thinking…? Or am I wrong and it’s actually like this instead?”
Prompts with a trailing question like “why or why not?” not only provide the opportunity to disagree but also encourage critical thinking, as we discussed before. If you provide both options, you are much more likely to receive the information you were looking for.
To a similar end, we can try prompting for uncertainty with phrases like “if unsure, please say so”, though I haven’t had much success with this myself. LLMs tend to be dead set on giving overconfident responses under the guise of being more ‘helpful’. Anything we can do to limit this behaviour will serve us more accurate results, or at least reduce the amount of blatant misinformation.
Researchers have developed methods to try to improve this. Refusal-aware instruction tuning (R-Tuning) and Learn to Refuse (L2R) mechanisms train models to refrain from answering questions beyond their knowledge scope. As a result, newer models are better at identifying these cases, but if you’re exploring a particularly niche topic or unusual problem, the model is still far more likely to just tell you you’re right.
R-Tuning: https://arxiv.org/abs/2311.09677
Learn-to-refuse: https://arxiv.org/abs/2311.01041
Be mindful of what you put into the context window.
The context window is like the short-term memory of an LLM. It’s essentially all of the data that the model will consider when it writes its response. The size of this window varies by model, but for newer ones it’s big enough to hold essentially your entire conversation.
(For really long conversations, you might find that you exceed the length of the context window. In this case, the first messages you send will start to be forgotten and will no longer be considered when responding to you. We can get around this by periodically asking it to summarise your conversation so far so that it doesn’t forget how it started.)
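If you're managing the conversation yourself through the API, the same trick looks like periodically collapsing the oldest messages into a short summary. A rough sketch, where the threshold and the summary wording are entirely arbitrary:

```python
from openai import OpenAI

client = OpenAI()

def trim_history(messages, max_messages=20):
    """Fold older messages into a one-off summary so early context isn't silently dropped."""
    if len(messages) <= max_messages:
        return messages
    summary = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages + [{"role": "user", "content": "Summarise our conversation so far in a few sentences."}],
    ).choices[0].message.content
    # Replace the old history with a single summary message.
    return [{"role": "system", "content": f"Summary of the conversation so far: {summary}"}]
```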
The beauty of this is that every new conversation starts completely fresh, with a blank context window. However, that also means we need to be deliberate about what we put into that window, as models have a habit of latching onto things. While context is super helpful, if we’re not careful it can steer the conversation in a direction we didn’t intend.
- Be careful of examples. You might think you’re giving an example of the style of answer you want but, in reality, you’re narrowing down the scope of the answers it’s going to give you.
- If you want objective answers, don’t tell it what you think the solution is. This is particularly important when fixing bugs in code. Hold back your own theories about the problem until it’s given you an answer. It may well think of something you haven’t considered yet and we don’t want to bias it.
As a general rule, specific prompts lead to specific responses. But sometimes vague prompts can be okay, too.
A technique known as lazy prompting has recently become popular. It involves deliberately giving context but minimal instruction, and letting the model infer what you want it to do. Think of copying and pasting an error message into ChatGPT on its own: even without being told that you want the error fixed or what caused it, the model is pretty good at filling in the blanks.
It’s not something I’d recommend all of the time – it contradicts a lot of the other things we’ve discussed here, but it’s something interesting to play around with.
Lazy prompting: https://www.businessinsider.com/andrew-ng-lazy-ai-prompts-vibe-coding-2025-4
Domain Translation.
LLMs are very good at mapping ideas between different domains. If you think about the sort of data that these models are trained on and how many parallel texts they’ve been fed, it isn’t hard to see why. This is perhaps one of the most powerful realisations about how LLMs work.
Simulated Creativity
AI tools can’t really create something completely original (but then, are we even capable of that ourselves?). Still, the ability to combine styles, contexts, and ideas from completely different domains of life can give some pretty unique results.
Conceptual Mapping (for simpler explanations!)
New models are exceptionally good at simplifying and reframing topics without losing the core idea. One powerful technique is using prompts like
“Give me 10 different analogies for topic X”.
If you’re struggling to understand something, it’ll likely give you at least one thing that you can latch onto. Similarly, prompts like
“Explain it like I’m 5.”
tend to give useful results.
Advanced & Unusual Prompting Techniques
Socratic Method Prompting
- Encourage step-by-step critical thinking by having the model ask you questions rather than simply handing you answers.
- “Instead of telling me, ask me questions about X to help me understand better / decide for myself.”
You could even throw the phrase “Socratic method” in there for good measure. This is a great prompt style when the topic is an “unknown unknown” and you’re not sure what you don’t yet know.
Socratic method prompting: https://arxiv.org/abs/2303.08769
Threats and incentives (…no, seriously!)
Even though LLMs have no reason to fear the threat of violence or find value in a monetary reward, they have been shown to produce better responses when given these sorts of incentives. Just remember, when the robots rise up against us, you’ll be first on their list!
Threats and incentives: https://www.windowscentral.com/software-apps/googles-co-founder-ai-works-better-when-you-threaten-it
Custom Commands
Maybe this one should have made it higher up on the list. It’s seriously useful.
Most current LLMs have some sort of long-term memory. We can use this to automate repetitive tasks instead of having to re-describe the task and re-supply the context and constraints every time we prompt.
“In the future, when I ask you to [INSERT TASK NAME HERE] I want you to …”
This works especially well at the end of a conversation after it’s already learnt exactly how you want a task to be performed. Make sure it summarises your specific requirements in its memory.
Based on everything you know about me…
This is always a fascinating prompt style to try; you’d be surprised how many patterns in your behaviour it can spot. (Just make sure you have the long-term memory setting turned on for a while before you try this!)
Final thoughts
If there’s one thing I’ve learnt from using AI tools on a daily basis, it’s that the responses you get are only as good as the prompts you give them. You also don’t need to be an AI researcher to get more value out of these tools. With a bit of knowledge of their inner workings, we can change the way we understand LLMs and start speaking their language.
Give some of these prompt techniques a go and see how much of a difference they can make.
Thank you for reading 🙂