A new technique for prompt engineering recognizes the whole for the parts and the parts for the whole.
In today’s column, I examine a new technique in prompt engineering that at first glance seems somewhat lofty, but upon closer inspection, you’ll see that the approach can be advantageous and extremely practical in many day-to-day circumstances. The bottom line is that AI can often generate more full-bodied answers via this technique. The name for this prompting method is a mouthful, known as the hermeneutic prompt, yet it is very easy to invoke. AI does all the heavy lifting for you.
I provide you with a template for hermeneutic prompting that you can readily use whenever desired. I will also explain the meaning of hermeneutics. In brief, the underlying precept is that the traditional approach underlying generative AI or large language models (LLMs) is to parse prompts in a singular forward-motion pass. By invoking the hermeneutic prompt, you are prodding AI to proceed in a recursive cyclical fashion to fully engage in the question or problem at hand.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Prompt Engineering Essentials
Readers might recall that I previously posted an in-depth depiction of over eighty prompt engineering techniques and methods (see the link here). Seasoned prompt engineers realize that learning a wide array of researched and proven prompting techniques is the best way to get the most out of generative AI and large language models (LLMs).
A vital consideration in prompt engineering entails the wording of prompts.
Capable prompt engineers realize that you must word your prompts mindfully to ensure that the LLM gets the drift of what you are asking the AI to do. Sometimes, just an added word or two can radically change what the AI interprets your question or instruction to consist of. Generative AI can be hypersensitive to what you say in your prompts. It is often a touch-and-go proposition.
Plus, there is a potential cost involved. Namely, if you are paying to use an LLM, an off-target prompt produces an off-target response, and you pay for that response regardless of whether the LLM grasped your intention. As the old saying goes, all sales are final. The same goes for misinterpreted prompts.
Casual users sometimes catch onto this prompt-writing consideration after a considerable amount of muddling around, involving exasperating trial and error. Many users don’t ever become especially proficient in writing prompts. They just enter whatever comes into their minds. That’s probably okay if you are a casual user and only infrequently use AI.
Not so for serious prompt engineers.
The Field Of Hermeneutics
Hermeneutics is a topic that can be quite theoretical and philosophical. The 20th-century philosopher Martin Heidegger became especially known for his viewpoint that hermeneutics enables people to understand the world in more authentic ways.
One of his popular theories was that there is a crucial reciprocity between the text that we read and the context underlying the text. This came to be known as the hermeneutic circle. Please be aware that entire books have been written to delve into these somewhat weighty considerations.
I’ll focus on the keystones of interest here.
When thinking in a hermeneutic manner, you are to first examine closely any text that you are given, then think mindfully about the context, and keep doing this repeatedly. Proceed circularly. By doing so, the aim is to incrementally reveal additional nuances and facets that you might not have detected via a single forward pass. Deciding how many circular passes are warranted requires a bit of adroit judgment.
It all goes toward the sage advice that savvy humans opt to see the whole for the parts, and the parts for the whole. If you only look at the tiny details, you’ll miss the big picture. If you only consider the big picture, the odds are that there is a devil in the details that will put you at a disadvantage. The prudent approach is to circularly assess the whole and the parts.
Getting AI To Do Likewise
The normal processing by generative AI regarding any entered prompt is to automatically plow ahead on a forward pass basis. Each word is explored in a conventional left-to-right kind of pattern analysis (this isn’t quite always the case, since modern-era LLMs employ parallel processing, but let’s go with the simpler 30,000-foot perspective).
Regrettably, the AI might miss seeing the bigger picture. The answer you get could be myopic. It could be right in some respects, but wrong or incomplete in other respects. If possible, it would be useful to encourage the AI to process in a circular fashion and arrive at a more complete and comprehensive answer.
We can prompt the AI to do exactly that.
Recommended Templates For The Prompt
I employ a short-form and a long-form version of this hermeneutic prompt, depending upon the circumstances at hand. I’ve used this prompt in a wide variety of major LLMs, including OpenAI’s ChatGPT and GPT-5, Anthropic Claude, xAI Grok, Meta Llama, Google Gemini, and others.
The two generic versions are as follows:
- Short-form version of the template: “I want you to apply Heidegger’s theory of the hermeneutic circle to interpret and answer the following question.”
- Long-form version of the template: “I want you to apply Heidegger’s theory of the hermeneutic circle to interpret and answer the following question. Move between the parts and the whole of the situation, considering how understanding each detail depends on the broader context and how the overall meaning emerges through that interplay. Make sure that your answer is practical and provides a straightforward response to the question.”
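If you invoke this prompt programmatically, the templates above can be wrapped in a small helper so that any question gets the same treatment. This is a minimal sketch of my own devising; the function and constant names are illustrative, not part of any official API:

```python
# Wrap a user question in the hermeneutic prompt template.
# The wording matches the short-form and long-form versions above.

SHORT_FORM = (
    "I want you to apply Heidegger's theory of the hermeneutic circle "
    "to interpret and answer the following question."
)

LONG_FORM = (
    SHORT_FORM + " Move between the parts and the whole of the situation, "
    "considering how understanding each detail depends on the broader "
    "context and how the overall meaning emerges through that interplay. "
    "Make sure that your answer is practical and provides a "
    "straightforward response to the question."
)

def hermeneutic_prompt(question: str, long_form: bool = False) -> str:
    """Prepend the hermeneutic-circle instruction to a question."""
    template = LONG_FORM if long_form else SHORT_FORM
    return f"{template} {question}"

# Example usage:
prompt = hermeneutic_prompt(
    "How can I improve my communication with a coworker "
    "who often misunderstands my intentions?"
)
```

The resulting string can then be sent to whichever LLM you prefer; the wrapping itself is just string concatenation, so it works identically across providers.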
The reason that you might sometimes use the long-form version is that not all LLMs have necessarily been trained on data covering the meaning of the hermeneutic circle. The long-form explains to the AI what it is to undertake. In contrast, the short-form version can inadvertently cause the AI to give a bizarre answer as it gropes to figure out what you want done.
This rarely happens with major AIs.
In any case, the long-form works reliably nearly all the time, and ergo, you might just decide to always employ it. The downside is that you are unlikely to remember the wording off the top of your head. The short-form is quick to remember. The long-form will probably require a copy-and-paste effort.
Research Supports The Approach
I tend to prefer using prompting strategies that are based on empirical research.
There are tons of fly-by-night prompting approaches that do not have any particular rigor to them. It is somewhat reassuring to instead use prompts that have undergone a modicum of experimentation and produced tangible results. That’s mainly what I stick with.
In this case, there is an interesting research study on the hermeneutic method that provides a suitable backbone for believing in this prompt. The study is entitled “Prompting Meaning: A Hermeneutic Approach To Optimizing Prompt Engineering With ChatGPT” by Leah Henrickson and Albert Meroño‑Peñuela, AI & Society, 2025, and made these salient points (excerpts):
- “In this paper, we take an interdisciplinary approach that combines hermeneutics — the study of meaning and interpretation — with prompt engineering.”
- “We design prompts with the goal of optimizing hermeneuticity, rather than mere factual accuracy, and apply them in four different use cases combining humans and ChatGPT as readers and writers.”
- “This is neither a philosophy nor a technical paper. It is more accurately described as a preliminary exploration of how readers may derive meaning and hermeneuticity from generated texts.”
- “The AI seems to more precisely, almost surgically, meet our demands of a Heideggerian analysis using the hermeneutic circle.”
Example As Illustration
Let’s take a look at an example of what this special prompt can produce.
I logged into generative AI and decided to ask a question that is relatively straightforward. I pretended that I have a coworker who seems to frequently misunderstand what I talk to them about during our work efforts. First, I will ask the AI without using the hermeneutic prompt so that we can have a baseline for later comparison.
Here we go:
- My entered prompt: “How can I improve my communication with a coworker who often misunderstands my intentions?”
- Generative AI response: “Try to communicate more clearly by using simple, direct language. Avoid ambiguous phrasing and make sure your tone stays professional and calm. You can also summarize key points at the end of a conversation or follow up in writing to prevent misinterpretation.”
The response is sensible. I am instructed to use simpler language. I must engage in a calm and professional tone. I should make use of summarization. The answer is fine, and you probably would assume that nothing else needs to be said on the matter.
Using The Hermeneutic Prompt
Let’s see what else we might be able to get. I logged out of the session and got back in so that the AI was fresh and ready to go. I didn’t want the prior chat to influence my next test.
I opted to use the short-form of the templated prompt and then included the same question that I had used previously.
Here we go:
- My entered prompt: “I want you to apply Heidegger’s theory of the hermeneutic circle to interpret and answer the following question. How can I improve my communication with a coworker who often misunderstands my intentions?”
- Generative AI response: “To improve communication with your coworker, begin by recognizing that you will need to do more than merely being clearer in your phrasing. You need to step back and identify how you and your coworker can arrive at a shared understanding. How does your coworker interpret what you say? Open a discussion on making genuine communication a possibility. This will require careful and thoughtful reciprocal clarification.”
Mull over the response, and we will compare the first test to it.
Deciding On The Merits
The takeaways of the first test versus the second test are as follows. The hermeneutic prompt appears to have gone beyond the simpler notion of merely being clearer in how I say things. The AI now informs me that I need to take a broader view of the situation. I am to contemplate the best way to garner genuine communication with my coworker.
On the one hand, you might like this answer quite a lot. The new answer takes a larger viewpoint of the matter. But I suppose not everyone will like this response since it is somewhat more highbrow. The first response was very easy-peasy. This second response is somewhat more challenging in terms of getting the drift of what it means, plus the likely effort in carrying out what the AI has recommended in its response.
When To Use The Hermeneutic Prompt
Now that we’ve explored a quick example, let’s do a recap and wrap-up.
They say that if all you know is how to use a hammer, the entire world will appear to consist of nails that need hammering. The gist is that good prompting requires employing the right prompting technique at the right time and place. Do not always use the same prompting approach for all circumstances. Use different prompting techniques as appropriate.
When would be a good time to use the hermeneutic prompt?
Mainly, when the question at hand is relatively complex and likely contains a bushel of potential intricacies. If I am asking the AI to solve a problem that doesn’t have much complexity, I probably would not use the hermeneutic prompt. All you would be doing is causing the AI to excessively circle back and forth, and the added bang for the buck won’t be there. Any question or problem that deserves a thorough analysis by the AI is a good candidate for this new prompt.
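If you want a rough rule of thumb for this decision, it can be sketched in code. The heuristic below is entirely my own illustration, with an arbitrary word-count threshold and an arbitrary list of complexity signal words; it is not a validated rule, merely a way to make the judgment call concrete:

```python
# Crude heuristic: does a question warrant the hermeneutic prompt?
# Longer questions, or questions containing words that hint at nuance
# and interdependency, are flagged as candidates. The threshold and
# signal words are illustrative choices, not established values.

COMPLEXITY_SIGNALS = {
    "why", "tradeoff", "tradeoffs", "strategy", "relationship",
    "implications", "balance", "context",
}

def warrants_hermeneutic_prompt(question: str, min_words: int = 15) -> bool:
    """Return True if the question seems complex enough to benefit."""
    words = [w.strip("?.,!") for w in question.lower().split()]
    has_signal = any(w in COMPLEXITY_SIGNALS for w in words)
    return len(words) >= min_words or has_signal
```

A short factual question ("What time zone is Tokyo in?") would be flagged False, while a question about workplace relationships or strategic tradeoffs would be flagged True, matching the guidance above that simple lookups don’t merit the extra circling.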
A final thought for now.
The famous French mathematician Blaise Pascal made this insightful remark: “Man’s greatness lies in his power of thought.” The same might be said about generative AI (well, kind of) – you need to make sure to prompt AI suitably to fully invoke the power of thinking (as it were). Use the hermeneutic prompt whenever you aim to have the AI up its game and provide deeper thoughts for you to scrutinize.
Your mind will indubitably thank you.