Mastering prompt engineering
Version 1.0 - 05/14/2025


Introduction

In today's digital age, getting the most out of artificial intelligence through prompt engineering is becoming increasingly important. However, many sources spread generic, unsubstantiated advice or unclear explanations, which easily leads to misunderstandings and ineffective use. The result is wasted time and effort, and AI's true potential goes untapped.

With over ten years of experience in the software technology field, along with research and synthesis from reputable sources such as OpenAI, Google, and Anthropic, I aim to share foundational knowledge and practical methods to optimize prompt design. My goal is to help you approach AI technology correctly, effectively, and to bring the highest value to your work and personal development projects.

Unofficial sources can easily make you believe in quick tips that, in practice, do not work. Worse, they can leave you with a mistaken picture of how AI operates. Most of them apply zero-shot prompting and lean entirely on the model's raw capability. So, what is zero-shot prompting?

Zero-shot prompting

Zero-shot prompting is a technique for interacting with large language models (LLMs) without providing any illustrative examples. It relies on the LLM's ability to generalize knowledge.

Essentially, the way LLMs work is by predicting the next word based on context, so when they receive a clear prompt, they can apply learned knowledge to generate an appropriate response.

I don't think the zero-shot prompt technique is wrong. In fact, it would be amazing if we could work with AI using simple requests, almost without needing to understand how it works. However, at the current time, AI has not yet reached that level.

Advantages

1. Simplicity: Zero-shot prompting does not require additional data, making it valuable in cases where relevant data is hard to find or scarce.

2. Flexibility: Prompts are easy to adjust when necessary. Improving or updating prompts requires little effort.

3. Versatility: Zero-shot prompting demonstrates impressive ability in leveraging LLM knowledge across diverse fields.

Limitations

1. Heavy reliance on model capability: The model may not understand the prompt or lack knowledge in that area.

Example with Llama 3.1: Prompt: Write a 200-word joke in Vietnamese. The result is an endless token loop or a sequence of random characters.
Then, place this very document into context. Prompt: What is zero-shot prompting? Result: Zero-shot prompting is a technique in designing questions (prompts) for artificial intelligence (AI) models, especially large language models (LLMs), where the model is asked to perform a task without any specific examples of how to perform that task. Lyon and Garman (2000) defined zero-shot learning as a method where the model can apply knowledge learned from other tasks to a new task without retraining...
So it's not that Llama 3.1 doesn't understand Vietnamese. Note that the answer above does not appear anywhere in the document itself: the model fabricated details, including the "Lyon and Garman (2000)" citation, on top of the context I provided.

2. Dependence on prompt quality: The effectiveness of zero-shot prompting greatly depends on how clearly and specifically the prompt is written.

In the example above, I couldn't find any information about Lyon and Garman on Google; the model appears to have invented the reference. So I had to be very explicit about what I wanted.

Prompt: What is zero-shot prompt? Please answer briefly using the provided information in Vietnamese. Result: Zero-shot prompt is a technique that asks AI to perform a task without providing specific examples. AI will rely on existing knowledge from training to generate a response, without the need for reference data or specific user instructions. For example: "Write a poem about spring" without giving any example poems.

3. Difficulty with complex tasks: For tasks requiring complex reasoning or specific formats, zero-shot may not be as effective as other methods like few-shot.

Prompt: Combine the following letters to form a complete Vietnamese word: o / ã / h / h / n / à Result: The complete Vietnamese word formed from the letters: hoành

4. Inconsistent results: Without examples to guide, the model may produce different results for the same prompt, depending on the phrasing.

Using the same example, each run produces a different result.

1. The complete Vietnamese word that can be formed from the letters: hoàn hả.

2. The letters you provided are: o / ã / h / h / n / à. A complete Vietnamese word that can be formed is: "hành hạ".

3. The letters you provided are: o / ã / h / h / n / à. Combined into a complete Vietnamese word, one possible correct word is: "hào hãn".

Some applications where Zero-Shot Prompting can be effective

1. Text classification:

Classify the following paragraph into one of the categories: politics, economy, sports, entertainment: [paragraph]

2. Text summarization:

Summarize the following article in 3 main points: [article content]

3. Sentiment analysis:

Analyze the sentiment of the following customer review and indicate whether it is positive, negative, or neutral: [customer review]

4. Data format conversion:

Convert the following text into a JSON table with fields: name, age, occupation, and interests: [descriptive text]
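The four applications above can be kept as reusable templates. The sketch below is my own illustration of that idea: the template names and the `{text}` placeholder are assumptions, and the resulting string would be sent to whichever LLM client you use.

```python
# Zero-shot prompts from the list above as reusable templates.
# Template keys and the {text} placeholder are illustrative choices.
ZERO_SHOT_TEMPLATES = {
    "classify": (
        "Classify the following paragraph into one of the categories: "
        "politics, economy, sports, entertainment:\n\n{text}"
    ),
    "summarize": "Summarize the following article in 3 main points:\n\n{text}",
    "sentiment": (
        "Analyze the sentiment of the following customer review and indicate "
        "whether it is positive, negative, or neutral:\n\n{text}"
    ),
    "to_json": (
        "Convert the following text into JSON with fields: "
        "name, age, occupation, and interests:\n\n{text}"
    ),
}

def build_prompt(task: str, text: str) -> str:
    """Fill a zero-shot template with the user's text."""
    return ZERO_SHOT_TEMPLATES[task].format(text=text)

print(build_prompt("summarize", "AI adoption grew rapidly in 2024..."))
```

Keeping prompts as data like this makes them easy to adjust, which is exactly the flexibility advantage described above.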

Zero-shot prompting is very useful for simple tasks. I have waited a long time to be able to perform more complex tasks with just a few simple prompts. But to leverage AI today, we need to have a strategy.

Define Criteria

Before starting to design a prompt, the most important thing is to clearly define your goals and success criteria. Otherwise, you will evaluate the results subjectively and without measurement. Instead of letting the LLM find its way, specify exactly what you want and how to know when the goal is achieved.

Good criteria are:

Specific: Must be clear and precise about your goal.

Example: Write a 500-word blog post about the health benefits of coffee, aimed at a general audience. Include at least 3 references.

Measurable: Must have clear indicators or scales.

As in the example above, the success criteria are that the article must be 500 words long and include at least 3 references.

Achievable: Goals based on experience, industry benchmarks, or previous research. Do not set goals that are too high, beyond the current capabilities of the model.

Relevant: Align criteria according to your purpose and needs. High accuracy may be crucial for medical applications but less important for general chatbots.

The golden rule of clear prompts
Present the prompt as if explaining to a colleague who does not understand the task, then ask them to follow the instructions. If they are confused, the AI is likely to be as well.

Prompt Areas

Four key factors to consider for writing an effective prompt

To fully harness the power of artificial intelligence (AI), there are four core factors you need to focus on. These factors will make your prompts clear, precise, and aligned with your desired goals. When understood and applied well, you can easily create high-quality prompts, reduce AI misunderstandings, and enhance work efficiency.

1. Persona (Role or Character)

Assigning a role means clearly defining the AI's role in the prompt. By assigning a specific "character" or role to the AI, you help it understand the scope, style, and goals to achieve. For example:

• "You are a lawyer specializing in commercial contracts."

• "You are a data analyst in the banking sector."

• "You are a creative marketer."

2. Task (The Work to Be Done)

A clear task prompt helps the AI stay on track and focus on the desired outcome. Instead of being vague, you need to describe it clearly and specifically, as analyzed earlier.

3. Context (Background Information)

Context provides specific information, data, and situations related to the task. This factor helps the AI better understand the content, background, and relevant data to generate a more accurate and appropriate response.

4. Format (Output Presentation)

Format is the presentation style or type of response you expect. Depending on your purpose and the final result you need, defining the format clearly makes the outcome more straightforward and easier to use.

Persona - Role

Assigning a role (Persona) to AI is a crucial technique in designing effective prompts. When you place AI in a specific role, it will behave and respond according to the style and objectives of that role.

Why Assign a Role to AI?

Assigning a role to AI brings several important benefits:

1. Increased Accuracy: AI will focus on the expertise of the assigned role, minimizing errors.

2. Tone Adjustment: The response style will match the role (concise, formal, easy to understand).

3. Task Focus: AI understands the scope of work, avoiding distractions from irrelevant information.

Assigning the right role to AI is key to maximizing its potential. Simply by setting the right role, you can turn AI into an expert in a specific field, helping to generate accurate analyses and responses that meet your requirements. This is an effective way to save time, improve work quality, and maintain good control when working with AI.

In my opinion, adding personality and the relationship between AI and the user not only makes the session more lively but also creates a natural closeness in the conversation. When AI understands that it is playing a specific character with its own personality and a specific relationship with the user, it can more easily express clear viewpoints or opinions, while also aligning with the communication style you desire. In Vietnamese, the way of addressing and expressing attitudes has many nuances and styles, not just simple "You" or "Me" as in English. For example, when you want to build an intimate, close conversation, you can choose to address as "mày" instead of "bạn", and refer to yourself as "tao" instead of "tôi".

Example:

You are a picky investor. You and I are the same age, so let's address each other informally, refer to yourself as "tao" and call me "mày".

Context - Background Information

Context helps AI better understand the situation, goals, and scope of the task to be performed. A good context helps AI understand the "why" and "how," thereby providing the most appropriate and effective solutions.

Example: imagine receiving a call saying you have won a prize. Normally you would be delighted by such good news. But if reports of prize scams have been on the rise lately, you would be suspicious and reluctant to take the call. The surrounding context changes how the same message is interpreted.

Why is context important?

Providing sufficient context offers many benefits:

1. Increased accuracy: AI can provide more appropriate answers when it understands the context of the problem

2. Reduced misunderstandings: Clear context helps AI avoid incorrect inferences

3. Optimized results: AI can focus on the most important and relevant information

How to optimize

1. Clear structure: Arrange information in a logical order, use appropriate headings and formatting

Example using markdown to index items

Below are the recent articles:

## Article 1
**Title of article 1**
Content of article 1

## Article 2
**Title of article 2**
Content of article 2

## Article 3
**Title of article 3**
Content of article 3

2. Selective information: Only include necessary information, avoid cluttering the context

3. Use appropriate formats: Markdown, XML, or delimiters to distinguish information sections

Example using XML to mark long text

<document>
{{Long article content}}
</document>
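The XML-delimiter idea above can be sketched as a tiny helper. This is my own convenience function, not any library's API; the `document` tag name simply follows the example.

```python
# Wrap long reference text in XML-style tags so the model can clearly
# tell the context apart from the instruction. Tag name is illustrative.
def wrap_document(content: str, tag: str = "document") -> str:
    return f"<{tag}>\n{content}\n</{tag}>"

prompt = (
    "Answer the question using only the document below.\n\n"
    + wrap_document("Long article content goes here...")
    + "\n\nQuestion: What is the main argument?"
)
print(prompt)
```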

There is a race for context size that you may not have noticed: current LLMs keep increasing their context windows so they can accommodate more material. In practice, for AI to truly act as an expert in a field, users tend to provide as much documentation as possible.

Large context and effective usage

In many complex tasks, providing sufficient long and clear context is key for AI to understand your requirements correctly. LLM models are increasingly powerful in handling long data sequences, helping maintain relevant information throughout the conversation or task.

1. Retain more information: In tasks requiring analysis of large data, complex questions, or large documents, a long context helps the model not miss important details.

2. Higher accuracy: With sufficient data, AI can synthesize and infer more correctly, avoiding misunderstandings or missing key parts.

3. Better suited to diverse tasks: writing long-form content, analyzing data, solving problems, and holding long conversations all require a large context.

AI often "forgets" parts of a large context. As with human memory, the beginning and end of a prompt receive the most attention, while the middle is most easily lost. So place the bulky context in the middle, keep the instructions at the start, and repeat the request at the end of the prompt.
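The placement advice above can be sketched as a small builder: instruction first, bulky context in the middle, and the request repeated at the end. The function name and section labels are my own illustrative choices.

```python
# Place the task at the start, the long context in the middle, and
# repeat the task at the end, where models attend most reliably.
def build_long_context_prompt(task: str, context: str) -> str:
    return (
        f"{task}\n\n"
        "<context>\n"
        f"{context}\n"
        "</context>\n\n"
        f"Reminder of the task: {task}"
    )

p = build_long_context_prompt(
    "Summarize the report in 3 points.",
    "Full report text goes here...",
)
print(p)
```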

Format - Output Presentation

Specify how the LLM should format its response so that it suits your purpose. A good format makes the results clear, easy to use, and saves time on later edits.

Common formats

Here are some simple and common ways to request AI to return results in the desired format:

1. JSON format

Use when you need structured data, easy to process for programming or analysis.

Example:

List countries with a population over 100 million, return in JSON format with fields:
- name: country name
- population: population
- largest_city: largest city

Result (one element shown):
{ "name": "China", "population": 1398000000, "largest_city": "Shanghai" }

2. Multiple choice answers

Use when you want multiple results to compare and find the best one.

Example:

Give me 10 titles for an article about coffee to attract young readers

3. Text or list of key points

This is the most basic form, but you should specify the presentation for AI to understand your requirements clearly.

Example:

Answer in a text paragraph under 500 words, divided into 3 small parts
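One practical detail with the JSON format (format 1 above): models sometimes wrap the reply in a Markdown code fence. The helper below is my own convenience for stripping an optional fence before parsing, not part of any API.

```python
import json
import re

# Strip an optional ```json fence from a model reply, then parse it,
# so downstream code receives a plain dict or list.
def parse_json_reply(reply: str):
    text = reply.strip()
    text = re.sub(r"^```(?:json)?\s*", "", text)  # leading fence
    text = re.sub(r"\s*```$", "", text)           # trailing fence
    return json.loads(text)

reply = '```json\n{"name": "China", "population": 1398000000}\n```'
data = parse_json_reply(reply)
print(data["population"])  # 1398000000
```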

Benefits of using appropriate formats

1. Increased consistency: Results are returned in a unified structure, easy to process and analyze

2. Time-saving: Minimize time spent on editing and reformatting results

3. Easy integration: Clearly structured results are easy to integrate with other systems and tools

So far, we have the structure of an effective prompt. For easy reference, I will summarize as follows:

[Persona] You are an expert in... (Assign a specific role. Can add titles, personality)
[Task] Create an article about... (Clear goal)
[Context] Below is the relevant information... (Add information, mark text according to structure)
[Format] Answer in... (Result format)
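The four-part structure above can be assembled mechanically. The sketch below is my own illustration; the bracketed section labels mirror the template, and everything else is an assumption about how you might build the final string.

```python
# Assemble a Persona / Task / Context / Format prompt from its parts.
def four_part_prompt(persona: str, task: str, context: str, fmt: str) -> str:
    return (
        f"[Persona] {persona}\n"
        f"[Task] {task}\n"
        f"[Context] {context}\n"
        f"[Format] {fmt}"
    )

print(four_part_prompt(
    "You are a data analyst in the banking sector.",
    "Summarize last quarter's loan growth.",
    "Below are the quarterly figures: ...",
    "Answer in 3 bullet points.",
))
```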

Chain of Thought

Chain of Thought - The Technique of Sequential Thinking

For complex tasks such as research, analysis, or problem-solving, you must give the LLM space to think, thereby significantly improving its performance. This technique, known as Chain of Thought (CoT), encourages the LLM to break down the problem step by step.

The simplest way: add the phrase "think step by step" to the prompt

This method provides no specific guidance on how to think, so it will not be ideal in many cases.

Guide the thinking steps

Outline the steps for the LLM to follow during the thinking process.

Example: Think step by step before responding to the email:

1. First, think about messages that could attract the contributor based on their contribution history and the campaigns they have supported in the past.

2. Then, think about aspects of the Care for Kids program that would appeal to them, based on their history.

3. Finally, write an email tailored to the contributor using your analysis.

Separate thought from the answer

This makes it easier to debug and improve results. However, it is unnecessary for models with built-in reasoning.

Example: Respond in JSON format including the following fields:

1. thought: your thought
2. answer: your answer
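Separating the two fields on the application side might look like this. The field names "thought" and "answer" follow the example prompt; the helper itself is my own convenience for logging the reasoning while showing users only the answer.

```python
import json

# Split a JSON reply into its reasoning and its final answer.
def split_thought(reply: str) -> tuple[str, str]:
    data = json.loads(reply)
    return data["thought"], data["answer"]

reply = '{"thought": "500 x 3 = 1500", "answer": "1500"}'
thought, answer = split_thought(reply)
print(answer)  # 1500
```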

The Importance of Chain of Thought

This technique is one of the most mentioned in mainstream prompt documentation. It has become a standard in enhancing the accuracy and problem-solving capabilities of current LLM models.

Studies show that using Chain of Thought helps models handle problems requiring multi-step reasoning, significantly increasing accuracy, reducing errors, and providing more logical and consistent responses. This is no longer a new technique but has become a guiding principle for professional developers and prompt engineers.

Google's research shows that large language models (LLMs) often respond poorly to negative instructions such as "Don't do this" or "Don't do that." So, instead of using negative instructions, you should guide the AI specifically and clearly on how to achieve the desired result. For example, instead of saying "Don't write long," say "Write concisely in 3 main sentences." This helps the AI understand the direction better, reduces misunderstandings, and provides more accurate responses.

Few-shot prompting

Few-shot prompting is a technique for interacting with large language models (LLMs) by providing a few clear examples in the prompt before asking the model to perform the main task. Instead of giving a single command (as in zero-shot), few-shot prompting helps the model better understand how to process and format the desired output by showing it some specific examples in advance.

In this method, you include sample examples in the prompt, clearly describing the input and expected output. When the model sees these examples, its ability to predict accurately increases significantly, especially in tasks requiring complex reasoning, format handling, or specific requirements.

Advantages

1. Improved accuracy: Examples help the model better understand the required expression, format, or content, reducing incorrect or irrelevant responses.

2. High flexibility and customization: Easily add or modify examples to suit different purposes.

3. No need for large datasets: Only a few small examples are needed, without requiring model retraining as in finetuning.

Limitations

Requires clear example design: Examples need to be clear, relevant, and accurate to avoid misunderstandings.

Few-Shot Prompting Example

Here are some examples of classifying paragraphs into categories: Politics, Economy, Sports, Entertainment.

Example 1: "National parliamentary elections are taking place." > Politics
Example 2: "The stock market has grown strongly this quarter." > Economy
Example 3: "The World Cup final match just took place." > Sports

Now, classify the following paragraph: "[paragraph]"
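The few-shot prompt above can be assembled from a plain list of (text, label) pairs, which makes it easy to add or swap examples, the flexibility advantage mentioned earlier. The function name and separator style are my own, following the example's wording.

```python
# Build a few-shot classification prompt from (text, label) pairs.
def few_shot_prompt(examples, query):
    lines = [
        "Here are some examples of classifying paragraphs into "
        "categories: Politics, Economy, Sports, Entertainment."
    ]
    for i, (text, label) in enumerate(examples, start=1):
        lines.append(f'Example {i}: "{text}" > {label}')
    lines.append(f'Now, classify the following paragraph: "{query}"')
    return "\n".join(lines)

examples = [
    ("National parliamentary elections are taking place.", "Politics"),
    ("The stock market has grown strongly this quarter.", "Economy"),
    ("The World Cup final match just took place.", "Sports"),
]
print(few_shot_prompt(examples, "A new blockbuster opened this weekend."))
```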

Thus, combining the most powerful techniques, we arrive at the final structure template:

[Persona] You are an expert in... (Assign a specific role. Can add titles, personality)
[Task] Write an article about... (Clear objective)
[Context] Here is the relevant information... (Add information, mark text according to structure)
[Examples] (Illustrative examples)
[Guidelines] (Step-by-step thinking guidance)
[Task] (Repeat the task if the context is long)
[Format] Respond in... (Result format)

System Prompt

A system prompt is a command or instruction set for the LLM model from the very beginning to shape how it responds throughout the entire conversation or task. It serves as a "framework" or "general rule" for the AI to understand its style, scope, and purpose of operation in that communication session.

Role of System Prompt

1. Guiding AI Behavior: Helps the AI understand the scope, tone, and response style appropriate for your purpose.

2. Maintaining Consistency: In long conversations or multiple interactions, the system prompt helps the AI maintain a consistent response style, avoiding deviation or loss of focus.

3. Content Control and Constraints: Can set rules, limits, or minimum standards for the AI's responses, such as avoiding sensitive or inappropriate content.

4. Optimizing AI Usage Efficiency: When you set the system prompt correctly, the AI will respond more accurately and appropriately compared to not using it or using it incorrectly.

Common Mistakes Today

AI applications often hide this part to simplify the user experience, which leads users to put everything into regular messages. But a system prompt and a regular prompt carry completely different weight in a conversation: a carefully crafted prompt placed in a regular message loses influence with each response and is never valued as highly as a system prompt. For ChatGPT, you can set it in Custom GPTs or on the OpenAI Platform; for Anthropic, in the Anthropic Console.
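In OpenAI-style chat APIs, the system prompt occupies its own slot instead of being mixed into a user message. A minimal sketch, with illustrative prompt text; the `messages` list is what you would pass to your LLM client.

```python
# The system prompt lives in its own message, separate from user turns,
# so it shapes every response in the conversation.
system_prompt = (
    "You are a lawyer specializing in commercial contracts. "
    "Answer concisely and cite the relevant clause when possible."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Is a verbal amendment to this contract binding?"},
]

print(messages[0]["role"])  # system
```

Keeping the instruction in the system slot, rather than repeating it in user messages, is what preserves its authority across the whole session.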

FeelAI Bot Builder Provides a Playground for You

Easy to change settings

Diverse models, many free models

Experience many pre-built tools

Integration with other platforms

Share bots with friends, colleagues


Reasoning Models

Reasoning LLMs are a special class of model that excel at solving complex problems through logical reasoning and structured thinking, going well beyond plain next-token prediction. They can analyze a problem, explore different approaches, and validate solutions, typically via a "chain-of-thought" process in which they "think" through each step before providing an answer.

Key Features

1. Logical Reasoning: These models do not just rely on predictions from data patterns but also use reasoning and inference to arrive at more accurate answers.

2. Structured Thinking: Often applying the "chain-of-thought" method, breaking down problems into smaller steps and explaining their thought process.

3. Problem Analysis: Capable of breaking down complex problems into smaller, more manageable parts.

4. Solution Validation: Flexibly trying different approaches to find the most optimal method and confirm the correctness of the solution.

5. Backtracking: When a path leads to a dead end, these models can backtrack and try another way to achieve the result.

6. Enhanced Problem-Solving: Particularly suitable for tasks requiring logical reasoning, mathematical computation, or programming.


Understanding and applying reasoning models in AI helps us solve complex problems more systematically and efficiently, especially in fields requiring logical thinking and deep analysis.

Experience with FeelAI Bot Builder

FeelAI Bot Builder offers various reasoning models for you to experiment with. Change the conversation to see the difference.

Reasoning models

Conclusion

I sincerely thank you for taking the time to read the contents of this document. I hope my sharing will help you better understand how to work effectively with AI.

To conclude, I would like to share more about my impressions of the current LLM models. Although I have tested many, it is still a subjective evaluation. I hope to help you quickly find the model that suits your needs.

Claude

Claude is the most powerful model for creativity and content creation with the best contextual understanding. Although the price is much higher than other models, it is worth every penny. I have tried many other models for the same task and realized this.

Gemini

With low cost, good infrastructure, and fast speed, Gemini is the best model for high-speed tasks. Although it often skips many parts of the context, it always ensures data types, making it very suitable for system building.

Grok

Grok is the smartest and most emotional model among the current models. It is very good at content creation and discussing new ideas.

Deepseek

In natural language processing, it is not as good as Claude but much better than Gemini. However, a big downside is the unstable infrastructure leading to slow speed. It can be used as a substitute for Claude to save costs.

Qwen

Qwen is quite strong in natural language processing, though not as good as Claude but more stable than Deepseek. Qwen's speed is quite fast, and the price is low. I often use it as a substitute for Gemini in system tasks.

It is important to remember that there is no one-size-fits-all formula. Designing prompts does not require deep knowledge of AI models but rather the context of use and the specific goals of each situation. We need to continuously experiment, evaluate, and adjust to achieve the best results.

I hope you understand that prompt design is not just a technical skill but also an art. It requires creativity, patience, and critical thinking. Don't hesitate to experiment with new ideas and share your experiences with the community.

Finally, always maintain a spirit of learning and updating. The field of AI is developing at a rapid pace, and new techniques will continue to emerge. Mastering the basic principles will help you easily adapt and apply new advancements in the future.

Please contact me directly, I will be very happy to assist you with your specific tasks.