Tip #1 — Allow the model to fail. 🤷‍♀️

The fundamental principle here is giving AI models permission to acknowledge their limitations. The "license to kill" (ahem, sorry, I mean the "license to say I don't know") is one of your best weapons against hallucinations.

By explicitly allowing the model to admit when it doesn't know something or is uncertain, we create a more reliable and trustworthy interaction. This approach not only reduces the likelihood of incorrect responses but also helps establish a more honest dialogue between user and AI.

<aside>

Prompt: Please answer the following question as accurately as possible. If you don't know the answer or if the information is uncertain, it's okay to say, "I don't know."

<info>{INPUT DATA}</info>

<question>{QUESTION}</question>

</aside>

This approach helps create a more natural interaction where the AI can be upfront about its limitations.
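In practice, this is just a matter of how you assemble the prompt string before sending it to whatever model client you use. Here's a minimal sketch in Python; the function name and the `<info>`/`<question>` tag wording mirror the example above and are illustrative, not a fixed API:

```python
def build_safe_prompt(info: str, question: str) -> str:
    """Build a prompt that explicitly permits the model to say "I don't know".

    The instruction sentence and the XML-style tags are just one possible
    phrasing; adapt them to your own prompt conventions.
    """
    return (
        "Please answer the following question as accurately as possible. "
        "If you don't know the answer or if the information is uncertain, "
        'it\'s okay to say, "I don\'t know."\n\n'
        f"<info>{info}</info>\n\n"
        f"<question>{question}</question>"
    )


# Example: the resulting string is what you'd pass to your model client.
prompt = build_safe_prompt(
    "Paris is the capital of France.",
    "What is the capital of France?",
)
```

The key point is simply that the escape hatch ("it's okay to say I don't know") is baked into every prompt you send, not added ad hoc.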

Tip #2 — Use leading words. 🚏

Leading words are specific terms or phrases that guide the AI model's response in a particular direction. They act as signposts, helping to frame the context and shape the output in the desired way.

Strategically placing an incomplete sentence or structural element at the end of your prompt acts as a powerful directional cue. The method exploits the AI's natural tendency to complete patterns: when you leave a thought unfinished, the model will try to continue it in a way that makes sense, which steers the output toward the specific completion you had in mind.

I mean, if we oversimplify it, at its core, Generative AI is essentially a sophisticated text prediction algorithm, right?

Here’s an example:

<aside>

Prompt: Generate an SQL query that retrieves the names and ages of users over 18 from the 'users' table.

SELECT name, age

</aside>

By providing SELECT name, age as a leading phrase, we're essentially giving the model a clear starting point. The AI will naturally continue with the FROM clause and WHERE conditions to complete a valid SQL query. This technique works particularly well for structured outputs like code, queries, or any format with a well-defined syntax.

The beauty of using leading words is that they subtly guide the AI without being overly prescriptive, allowing for both accuracy and flexibility in the response.
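Mechanically, this amounts to appending the leading phrase after your instruction so the model's completion starts from it. A minimal sketch in Python (the helper name is made up for illustration):

```python
def with_leading_words(instruction: str, lead: str) -> str:
    """Append a leading phrase so the model continues from it.

    The blank line separates the instruction from the partial output the
    model is expected to complete.
    """
    return f"{instruction}\n\n{lead}"


# Reproducing the SQL example from above:
prompt = with_leading_words(
    "Generate an SQL query that retrieves the names and ages of users "
    "over 18 from the 'users' table.",
    "SELECT name, age",
)
```

Because the prompt now ends mid-query, the model's most natural continuation is the rest of the statement (the FROM clause and WHERE condition) rather than prose about SQL.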

Tip #3 — Ask the model to evaluate a prompt and suggest changes 👩‍💻

We spend time crafting the perfect prompt to properly instruct the AI. But what if we ask the AI itself to help us perfect our prompts?

When a colleague mentioned trying this technique and getting exactly the results they wanted, I was eager to try it myself. I've been amazed by how effectively it works. I shouldn't be surprised. The truth is, AI knows how to talk to… AI.


By asking the model to analyze and suggest improvements to your prompts, you can leverage its deep understanding of effective prompt structures and patterns. The model can help refine your prompt's structure and relevance, fix language issues like typos and grammar, and identify potential edge cases you might have overlooked.

This collaborative approach between human and AI can lead to more precise, efficient, and effective prompts that better achieve your desired outcomes.
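A simple way to put this into practice is to wrap your draft prompt in a meta-prompt that asks the model to critique it. The sketch below assumes this string gets sent to your model of choice; the reviewer instruction and the `<prompt>` tag are illustrative wording, not a standard:

```python
def build_review_prompt(draft_prompt: str) -> str:
    """Wrap a draft prompt in a meta-prompt asking the model to improve it.

    The critique criteria mirror the ones discussed above: structure,
    language issues, and overlooked edge cases.
    """
    return (
        "You are an expert in prompt engineering. Review the prompt below "
        "and suggest concrete improvements: clearer structure, fixes for "
        "typos or grammar, and edge cases it might have overlooked. "
        "Return the revised prompt along with a short explanation of each "
        "change.\n\n"
        f"<prompt>{draft_prompt}</prompt>"
    )


review_request = build_review_prompt(
    "Summarize the attached report in three bullet points."
)
```

Feeding the model's revised version back through the same loop once or twice is usually enough; past that point the suggestions tend to become cosmetic.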

Tip #4 — Use a mix of hard and soft prompts 🍬